
Basic Physical Chemistry II, part 1

Erik Lötstedt

Contents
1 Course details 2
  1.1 Lecturer 2
  1.2 Lecture dates 2
  1.3 Lecture room 2
  1.4 Syllabus 2
  1.5 Textbook 2
  1.6 Reference literature 2
  1.7 Examination 3

2 Time-dependent quantum mechanics 4
  2.1 Introduction 4
  2.2 Wave packets 6
  2.3 Wave packets: Continuum spectrum 11

3 Electronic structure and potential energy curves 16
  3.1 Born-Oppenheimer approximation 16
  3.2 Explicit example: H2+ 20

4 Light-molecule interaction 24
  4.1 Weak-field interaction 25
  4.2 Comparison of perturbation theory and numerically exact results 29
  4.3 Adiabatic (field-following) state approach 34
  4.4 Comparison of the adiabatic state approach and numerically exact results 37

5 Simulation methods 40
  5.1 Dimensionless form of the Schrödinger equation 40
  5.2 Finite difference method 41
  5.3 Basis expansion method 44
  5.4 Time evolution 49

6 Physical constants and unit conversions 53

7 References 54

Last revised on May 28, 2024
1 Course details
1.1 Lecturer
Erik Lötstedt (Associate Professor, Quantum Frontiers laboratory, Department of Chemistry, School of Science, The University of Tokyo)
Email: [email protected]
Yamanouchi group webpage: yamanouchi-lab.org/e/

1.2 Lecture dates


June 4 (Tue), June 11 (Tue), June 18 (Tue), June 25 (Tue) 2024; 10:25 – 12:10

1.3 Lecture room


1402, chemistry main building

1.4 Syllabus
June 4
• Time-dependent quantum mechanics (Sec. 2 on page 4)

June 11
• Electronic structure and potential energy curves (Sec. 3 on page 16)
• Light-molecule interaction (Sec. 4 on page 24)

June 18
• Light-molecule interaction, continued (Sec. 4 on page 24)

June 25
• Simulation methods (Sec. 5 on page 40)

1.5 Textbook
The lectures are based on the material in these notes.

1.6 Reference literature


• Some material in this lecture is based on the book Introduction to quantum mechanics: A time-dependent perspective, D. J. Tannor (University Science Books, 2007); ISBN 1-891389-23-8.

• A good book on quantum mechanics with many examples of applications to molecules is Quantum Mechanics of Molecular Structures, K. Yamanouchi (Springer, 2012); ISBN 978-3-642-32380-5.

• In case you need to recall the basics of quantum mechanics, Chemistry LibreTexts has a freely available set of lectures.

• The authoritative reference for the numerical values of physical constants such as Planck's constant or the electron mass is the website of NIST (National Institute of Standards and Technology in Gaithersburg, MD), see physics.nist.gov/cuu/Constants/index.html. For convenience, the numerical constants used in these lecture notes are collected in the final Section 6.

• In some places, we refer to the original scientific literature. To access scientific journals from home, check lib.u-tokyo.ac.jp/en/library/literacy/user-guide/campus/offcampus (no special setup is required; you log in with your UTokyo account).

1.7 Examination
Attendance is checked for each class via UTOL. One or two problems are announced at the end of each class. The solutions to the problems should be submitted via UTOL (Assignments) by Monday of the week after the lecture. Any readable format is acceptable (Word, LaTeX, photograph/scan of handwritten solutions, ...). The grade is based on the attendance and on the problem solutions. To get credit for part 1, you have to submit solutions each week (4 times).
2 Time-dependent quantum mechanics
2.1 Introduction
All time-dependent phenomena in quantum mechanics can (in principle) be described by
the time-dependent Schrödinger equation (TDSE)

iℏ ∂Ψ(q, t)/∂t = H(t)Ψ(q, t),    (1)

where i = √−1 is the imaginary unit, ℏ = h/(2π) ≈ 1.05457 × 10^−34 J s is the reduced Planck constant, Ψ(q, t) is the time-dependent wave function, and H(t) is the time-dependent Hamiltonian. q is a general coordinate, which (for example) can be the spatial coordinates of a particle or an internuclear distance. The precise form of the Hamiltonian depends on the physical system we are trying to model.
Two examples of Hamiltonians which come up in introductory classes of quantum
mechanics are the harmonic oscillator (HO) Hamiltonian and the Hamiltonian of the
hydrogen atom. The harmonic oscillator Hamiltonian reads

HHO = −(ℏ²/2µ) ∂²/∂x² + (1/2) µω²x²,    (2)

where µ and ω are constants. The HO Hamiltonian is often used as a model of a vibrating
diatomic molecule. In this case, µ = m1 m2 /(m1 +m2 ) is the reduced mass (mk is the mass
of atom k), ω is the vibrational frequency, and x = R−Req represents the deviation of the
internuclear distance R from the equilibrium internuclear distance Req . The Hamiltonian
of the hydrogen atom reads

HH = −(ℏ²/2me) ∇² − e²/(4πε0 r),    (3)

where me is the mass of the electron, e is the magnitude of the charge of the electron (a positive number; the electron charge is of course negative), ε0 is the vacuum permittivity, and r = √(x² + y² + z²) is the distance between the electron and the proton.
Both Hamiltonians (2) and (3) are independent of time. However, if we want to
consider the interaction with a time-dependent field, such as the electric field of a laser
pulse, we have to add a time-dependent potential term to the Hamiltonian. If we assume
a laser field F (t) = F (t)ez along the z direction, the time-dependent HO Hamiltonian is

HHO(t) = −(ℏ²/2µ) ∂²/∂x² + (1/2) µω²x² − d(x) cos θ F(t),    (4)


where d(x) is the dipole moment of the molecule, and θ is the angle between the molecular
axis and the field. In the case of the hydrogen atom, we have

HH(t) = −(ℏ²/2me) ∇² − e²/(4πε0 r) + eF(t)z.    (5)
The TDSE (1) is an initial value problem, which means that the solution Ψ(q, t) for
all times t is determined uniquely by the initial wave function Ψ(q, t = t0 ). Usually, we
take t0 = 0.

The physical meaning of the wave function is that the squared modulus of the wave function defines a probability density: the probability p(q, t) of finding the system at time t in a coordinate range between q and q + dq, where dq is small, is

p(q, t) = dq |Ψ(q, t)|².    (6)

The probability of finding the system in a finite range of q1 ≤ q ≤ q2 is

p12(t) = ∫_{q1}^{q2} dq |Ψ(q, t)|².    (7)

The total probability of finding the system in the complete range of coordinates should be 1,

∫ dq |Ψ(q, t)|² = 1.    (8)

The range of the integration in Eq. (8) depends on the problem. In the case of the harmonic oscillator, we should integrate ∫_{−∞}^{∞} |Ψ(x, t)|² dx, and in the case of the hydrogen atom, we should integrate

∫_{−∞}^{∞} dx ∫_{−∞}^{∞} dy ∫_{−∞}^{∞} dz |Ψ(r, t)|² = ∫_0^∞ r² dr ∫_0^π sin θ dθ ∫_0^{2π} dϕ |Ψ(r, t)|².    (9)

In quantum mechanics, the Hamiltonian operators are always Hermitian, which means that

∫ dq Ψ1*(q, t) H(t) Ψ2(q, t) = [∫ dq Ψ2*(q, t) H(t) Ψ1(q, t)]*    (10)

is satisfied for any pair of wave functions Ψ1(q, t), Ψ2(q, t). The order of the wave functions in the integral can be exchanged, but we have to add a complex conjugate. Another, shorter way of writing Eq. (10), using the bra-ket notation, is

⟨Ψ1(t)|H(t)|Ψ2(t)⟩ = ⟨Ψ2(t)|H(t)|Ψ1(t)⟩*.    (11)

We usually use the bra-ket notation since it is simpler. When we see a bra-ket, we just
remember that it is a short notation for the integral over the coordinates. For example,
Eq. (8) can be expressed in bra-ket notation as

⟨Ψ(t)|Ψ(t)⟩ = 1. (12)

The Hermiticity of the Hamiltonian ensures that (8) is satisfied at all times, that is,
the total probability is conserved and does not change as a function of t. The probability distribution p(q, t) can, as we shall see, change with time, even if the Hamiltonian itself does not depend on time.

Exercise 1. Show that Eq. (8) holds for all t by calculating d/dt ⟨Ψ(t)|Ψ(t)⟩. You have to use Eqs. (1) and (11).

2.2 Wave packets
How do we solve the TDSE (1)? In the case of a time-independent Hamiltonian [like (2) or (3)], the TDSE reads

iℏ ∂Ψ(q, t)/∂t = HΨ(q, t),    (13)

and we can use separation of variables, assuming a solution of the form

Ψ(q, t) = f (t)ψ(q), (14)

where we have denoted the general coordinate(s) with q, f (t) is a time-dependent function
which does not depend on q, and ψ(q) is a time-independent function. Insertion of the
ansatz (14) into the TDSE results in

Ψ(q, t) = e−iEt/ℏ ψ(q), (15)

where ψ(q) should satisfy


Hψ(q) = Eψ(q). (16)

Exercise 2. Derive Eq. (15).

Equation (16) is the time-independent Schrödinger equation, which is the equation


that usually shows up in textbooks and introductory classes on quantum mechanics. In
fact, as shown above, the time-independent equation (16) is a special case of the TDSE
(1).
A very important (perhaps the most important) property of the TDSE (1) is that it
is a linear equation. This means that if we can find two independent solutions Ψ1 (q, t)
and Ψ2 (q, t) satisfying Eq. (1), we can construct a third solution Ψ3 (q, t) according to

Ψ3 (q, t) = N [c1 Ψ1 (q, t) + c2 Ψ2 (q, t)], (17)

where N is a normalization constant chosen so that ⟨Ψ3 (t)|Ψ3 (t)⟩ = 1, and c1,2 are
arbitrary constant (time-independent) complex numbers.

Exercise 3. Check that the wave function Ψ3 (q, t) defined in Eq. (17) satisfies the
TDSE.

Exercise 4. Calculate the normalization constant N in Eq. (17).

For time-independent Hamiltonians, the linear property of the TDSE means that we
can find a general solution to the time-dependent problem if we know the eigenfunctions
of the Hamiltonian. This works as follows. First, we assume that all the eigenfunctions
together with the corresponding eigenenergies of the Hamiltonian are known,

Hψn (q) = En ψn (q). (18)

In the case of the hydrogen atom, ψn(q) are the s, p, d, ... orbitals, and in the case of the harmonic oscillator, ψn(q) are the HO eigenfunctions (proportional to a Hermite polynomial hn(ξ) times a Gaussian function e^(−ξ²/2), ψn(ξ) ∝ e^(−ξ²/2) hn(ξ), with ξ = √(µω/ℏ) x). The eigenfunctions in Eq. (18) are orthonormal,

⟨ψn|ψm⟩ = δmn = 1 if n = m, and 0 otherwise,    (19)

and form a complete set, which means that an arbitrary function can be expanded as a
linear combination of the ψn (q)’s. This means that we can write
Ψ(q, t = 0) = Σn cn ψn(q),    (20)

where the coefficients are given by

cn = ⟨ψn |Ψ(t = 0)⟩. (21)

By considering Eqs. (15) and (17), we see that the solution Ψ(q, t) to the TDSE is

Ψ(q, t) = Σn cn e^(−iEn t/ℏ) ψn(q).    (22)

A wave function like (22), which is a superposition of several eigenstates, is referred to as a wave packet.

Exercise 5. Derive Eq. (21).

Exercise 6. Check that Ψ(q, t) in Eq. (22) solves the TDSE.

As is clear from the form of the general solution (22), the wave function Ψ(q, t) depends on time, even though the Hamiltonian itself is time-independent. If we calculate the probability distribution |Ψ(q, t)|², we obtain

|Ψ(q, t)|² = Σ_{m,n} cm* cn e^(−i(En − Em)t/ℏ) ψm*(q) ψn(q),    (23)

which shows that |Ψ(q, t)|² in general is a time-dependent quantity. The exception is the case where the sum over n in Eq. (22) runs over one value n0 only, that is

Ψ(q, t) = cn0 e^(−iEn0 t/ℏ) ψn0(q).    (24)

Here we should put cn0 = 1 so that the wave function is normalized to one, ⟨Ψ(t)|Ψ(t)⟩ = 1. This means that the wave function is an eigenstate of the Hamiltonian. In this case the probability density becomes

|Ψ(q, t)|² = |ψn0(q)|²,    (25)

which is time-independent.

Exercise 7. Derive Eqs. (23) and (25).

For a general observable O(q) (for example, O(q) = q, or O(q) = H), we have in a similar way

⟨Ψ(t)|O|Ψ(t)⟩ = Σ_{m,n} cm* cn e^(−i(En − Em)t/ℏ) ⟨ψm|O|ψn⟩,    (26)

which is in general time-dependent. The physical meaning of ⟨Ψ(t)|O|Ψ(t)⟩ is the expectation value of O at time t, that is, the average value of O at time t. In the case of the harmonic oscillator, for O = x, ⟨Ψ(t)|x|Ψ(t)⟩ is the average value of the coordinate x at time t.
Let’s consider the simplest time-dependent wave packet consisting of a superposition of two eigenstates,

Ψ(t, q) = (1/√2) [ψ0(q) e^(−iE0 t/ℏ) + ψ1(q) e^(−iE1 t/ℏ)].    (27)

The time-dependent probability density can be calculated as

|Ψ(t, q)|² = |ψ0(q)|²/2 + |ψ1(q)|²/2 + ψ0(q)ψ1(q) cos(∆E t/ℏ),    (28)

where ∆E = E1 − E0, and we have assumed that ψ0(q) and ψ1(q) are real functions.

Exercise 8. Derive Eq. (28).

In Eq. (28), we see that there are two contributions to the probability density: one time-independent part given by Σ_{n=0,1} |ψn(q)|²/2, and one time-dependent part given by ψ0(q)ψ1(q) cos(∆E t/ℏ). The time-dependent part is a periodic function with period

T = 2πℏ/∆E,    (29)
meaning that the density returns to its initial (t = 0) value at t = T , 2T , . . . . The period
is proportional to the inverse of the energy difference ∆E. This means that large energy
differences lead to fast dynamics, and small energy differences lead to slow dynamics.
In general, the typical time scale of a dynamical process in quantum mechanics can be
estimated by Eq. (29) if we know the typical energy scale.
We illustrate Eq. (28) in the case of the harmonic oscillator. We take ψ0 (x) and
ψ1 (x) to be the ground state and first excited state of the harmonic oscillator, with

Figure 1: (a) Ground state (n = 0) and first excited state (n = 1) of the harmonic oscillator. (b)–(d) Snapshots of the time-dependent probability density |Ψ(t, x)|² = (1/2)|ψ0(x)e^(−itE0/ℏ) + ψ1(x)e^(−itE1/ℏ)|² at (b) t = 0, (c) t = πℏ/(2∆E) ≈ 1.9 fs, and (d) t = πℏ/∆E ≈ 3.8 fs.

harmonic frequency ω = 2πc · 4401 cm^−1 (c is the speed of light) and reduced mass µ = mp/2 = 0.84 × 10^−27 kg, corresponding to the H2 molecule. The energy levels of the harmonic oscillator are given by En = ℏω(n + 1/2), so ∆E = ℏω, and the period becomes T = 2πℏ/∆E = 1/(c · 4401 cm^−1) ≈ 7.5 fs. The quantum vibration of an H2 molecule takes place on a 10-fs time scale.
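The numbers above are easy to verify with a few lines of code. The following minimal sketch (in Python, using only NumPy; the physical constants are standard CODATA values, and the grid range is an arbitrary choice) evaluates the period T of Eq. (29) for the H2 parameters and samples the two-state density of Eq. (28):

import numpy as np

hbar = 1.054571817e-34          # J s
c = 2.99792458e10               # speed of light in cm/s, so c * (cm^-1) gives s^-1
mp = 1.67262192e-27             # proton mass, kg
mu = mp / 2                     # reduced mass of H2
omega = 2 * np.pi * c * 4401.0  # vibrational angular frequency, s^-1

# Period of the two-state beat, Eq. (29) with dE = hbar*omega
T = 2 * np.pi * hbar / (hbar * omega)
print(f"T = {T*1e15:.2f} fs")   # about 7.5 fs

# Normalized HO eigenfunctions n = 0, 1 on a grid (x in meters)
x = np.linspace(-0.7e-10, 0.7e-10, 2001)
xi = np.sqrt(mu * omega / hbar) * x
psi0 = (mu * omega / (np.pi * hbar))**0.25 * np.exp(-xi**2 / 2)
psi1 = np.sqrt(2) * xi * psi0

def density(t):
    """|Psi(t,x)|^2 of Eq. (28) for the 50/50 superposition of n = 0 and n = 1."""
    dE = hbar * omega
    return 0.5 * psi0**2 + 0.5 * psi1**2 + psi0 * psi1 * np.cos(dE * t / hbar)

# The density at t = 0 and at t = T/2 is mirrored about x = 0, as in Fig. 1.
rho0, rho_half = density(0.0), density(T / 2)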
Snapshots of the oscillating wave packet are shown in Fig. 1. At t = 0, the wave
packet is localized to the right side, but after a half period t = T /2 = πℏ/∆E, the wave
packet is localized on the left side.

Exercise 9. Consider a hydrogen atom in a superposition of the 1s ground state and the 2pz excited state. How long is the period T for the time-dependent motion of the wave packet? Make a sketch of the time-dependent probability density.

An alternative way of expressing the general solution to the TDSE (13) in the case of
a time-independent Hamiltonian is to write

Ψ(q, t) = e−itH/ℏ Ψ(q, 0), (30)

where Ψ(q, 0) is an arbitrary initial wave function. The exponential of the Hamiltonian
is defined by the Taylor expansion (remember that H is an operator)

e^(−itH/ℏ) = Σ_{k=0}^{∞} ((−it)^k/(ℏ^k k!)) H^k = 1 − (it/ℏ)H − (t²/(2ℏ²))H² + · · · .    (31)

If we expand the initial wave function Ψ(q, 0) in terms of the eigenstates ψn(q) as

Ψ(q, 0) = Σn cn ψn(q),    (32)

we obtain

Ψ(q, t) = e^(−itH/ℏ) Ψ(q, 0) = e^(−itH/ℏ) Σn cn ψn(q) = Σn cn e^(−iEn t/ℏ) ψn(q),    (33)

which is the same as Eq. (22).
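For a Hamiltonian represented by a finite Hermitian matrix, Eqs. (30)-(33) translate directly into code: diagonalize H once, and the propagator acts on the expansion coefficients simply by multiplying each cn by the phase factor e^(−iEn t/ℏ). The following minimal sketch (Python/NumPy; the 2 × 2 matrix is an arbitrary stand-in, not a Hamiltonian from these notes, and units with ℏ = 1 are assumed) illustrates this:

import numpy as np

hbar = 1.0                       # work in units where hbar = 1
H = np.array([[0.0, 0.2],
              [0.2, 1.0]])       # some Hermitian Hamiltonian matrix
E, V = np.linalg.eigh(H)         # eigenenergies E_n and eigenvectors (columns of V)

def propagate(c0, t):
    """Apply exp(-i H t / hbar) to an initial coefficient vector c0."""
    a = V.conj().T @ c0                    # expand in eigenstates, cf. Eq. (21)
    a = np.exp(-1j * E * t / hbar) * a     # phase factors, cf. Eq. (22)
    return V @ a                           # transform back to the original basis

c0 = np.array([1.0, 0.0], dtype=complex)
ct = propagate(c0, t=3.0)
print(np.linalg.norm(ct))        # stays 1: the total probability is conserved, Eq. (8)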

Exercise 10. Check that Ψ(q, t) defined in Eq. (30) satisfies the TDSE by differen-
tiating the series (31) term by term.

Exercise 11. Check Eq. (33).

2.3 Wave packets: Continuum spectrum
Usually, there are two types of eigenfunctions of the time-independent Hamiltonian: bound states and continuum states. The bound states are discrete, and can be characterized by a set of quantum numbers. Bound states are localized, that is, their wave function becomes very small outside the bound region. Mathematically, bound states can be normalized as

⟨ψn |ψn ⟩ = 1. (34)

In the case of the 1-D harmonic oscillator, there is only one quantum number, n, and all
states are discrete. In the case of the hydrogen atom, three quantum numbers n, ℓ, and
m are necessary to classify the bound states.
For many potentials (in all real systems) there are also continuum states. A simple
example is the hydrogen atom, where the potential reads

V(r) = −e²/(4πε0 r).    (35)
In this case, the bound solutions to the time-independent Schrödinger equation

HH ψnℓm (r) = En ψnℓm (r) (36)

have the energy

En = −(1/(2n²)) me e⁴/((4πε0)² ℏ²) = −13.6/n² eV.    (37)

We have En < 0. There are also solutions where the energy is larger than 0; these are the continuum solutions. They are not localized close to the origin, but continue to oscillate as a function of r even for large values of r. An illustration of this behavior is shown in Fig. 2.
The radial Schrödinger equation reads

−(ℏ²/2me) [ (1/r²) d/dr (r² d/dr) − ℓ(ℓ + 1)/r² ] ψkℓ(r) − e²/(4πε0 r) ψkℓ(r) = Ek ψkℓ(r),    (38)

where the energy E is positive, E > 0. The complete wave function is given by

Ψkℓm (r) = ψkℓ (r)Yℓm (θ, ϕ), (39)

where Yℓm(θ, ϕ) is a spherical harmonic. The solutions ψkℓm(r) are labeled with three quantum numbers, ℓ and m just like in the case of the bound states, and k, which is the wave number, defined in terms of the energy as

k = √(2me Ek)/ℏ.    (40)

The point is here that the wave number k is a continuous variable, not discrete, meaning
that k can take any value.
The continuum solutions are not normalized in the usual way as in Eq. (34), but
satisfy
⟨ψkℓm |ψk′ ℓ′ m′ ⟩ = δ(k − k ′ )δℓℓ′ δmm′ , (41)
where δ(k) is a Dirac delta function, and δℓℓ′ and δmm′ are Kronecker delta functions.

Figure 2: Illustration of the bound (top panel) and the continuum (bottom panel) 2p
states (ℓ = 1, m = 0) of the hydrogen atom. The radial wave function ψ(r) is shown.

In order to deepen our understanding of continuum states, let us consider the simplest possible system: the free particle (no potential) in one dimension. The time-independent Schrödinger equation reads

−(ℏ²/2m) d²ψ(x)/dx² = Eψ(x),    (42)

where m is the mass of the particle. The solutions of Eq. (42) are

ψk(x) = (1/√(2π)) e^(±ikx),   Ek = ℏ²k²/(2m),    (43)

where the factor 1/√(2π) is inserted to obtain the normalization

⟨ψk|ψk′⟩ = δ(k − k′).    (44)
To create a wave packet, we consider the superposition

Ψ(x, t) = ∫_{−∞}^{∞} dk a(k) ψk(x) e^(−iEk t/ℏ),    (45)

where a(k) is an arbitrary function of k. The wave packet (45) solves the TDSE, with the initial condition

Ψ(x, 0) = ∫_{−∞}^{∞} dk a(k) ψk(x).    (46)
Note that the plane-wave solutions (43) are not localized wave functions: ψk (x) extends
from x = −∞ to x = ∞. However, we can create a localized wave function by taking a
superposition of many plane waves. An instructive example is the Gaussian wave packet.
In this case, we consider the expression (45) and take the amplitudes
a(k) = (N/√(2π)) √(π/α0) e^(−k²/(4α0))    (47)

with the normalization factor

N = (2α0/π)^(1/4).    (48)

We assume that α0 is a real and positive number. The amplitudes (47) constitute a Gaussian function in k-space. When we insert the expression (47) into Eq. (45) and calculate the integral, we obtain

Ψ(x, t) = (N/√(1 + 2iℏα0 t/m)) e^(−α(t)x²),    (49)

where

α(t) = α0/(1 + 2iℏα0 t/m) = (1 − 2iℏα0 t/m)α0/(1 + 4ℏ²α0²t²/m²).    (50)

Exercise 12. Derive Eq. (49) by calculating the integral (45) with a(k) given by Eq. (47). You may use the general Gaussian integral formula

∫_{−∞}^{∞} e^(−ax² + ibx + ic) dx = √(π/a) e^(ic − b²/(4a)),    (51)

which is valid if Re(a) > 0.

Figure 3: Broadening of a Gaussian wave packet with time. The wave function at t = 0
is the vibrational ground state function of H2 .

Exercise 13. Check that the wave function (49) satisfies the TDSE iℏ∂Ψ/∂t =
−(ℏ2 /2m)∂ 2 Ψ/∂x2 .

Exercise 14. Check that the wave function (49) is normalized to 1, that is, check that ⟨Ψ(t)|Ψ(t)⟩ = ∫_{−∞}^{∞} Ψ*(x, t)Ψ(x, t) dx = 1 for all t. Again, the formula (51) is useful here.

At t = 0, the Gaussian wave packet (49) takes the form

Ψ(x, t = 0) = N e^(−α0 x²),    (52)

which is a Gaussian function with width σ = 1/√α0. As we can see in Eq. (49), as time goes on, the wave function remains a Gaussian, but the width increases in time as σ(t) = √((1 + 4ℏ²α0²t²/m²)/α0). The characteristic time scale for spreading of the wave packet is therefore given by t such that the term ℏ²α0²t²/m² is of the order of unity, that is, t ∼ m/(ℏα0). Let's consider the example of the H2 molecule again. The vibrational ground state wave function can, to a good approximation, be represented by a Gaussian

Ψ0(x, t = 0) = N e^(−µωx²/(2ℏ)),    (53)

which means that α0 = µω/(2ℏ) in this case. If we now imagine that, for some reason, the harmonic confinement disappears because the chemical bond in the molecule is broken (for example, by ionization or excitation of the molecule), then the wave packet will start to spread out on a time scale t ∼ µ/(ℏα0) = 2/ω. In the case of H2, we have ω = 2πc × 4401 cm^−1 ≈ 8 × 10^14 s^−1, so t ∼ 2 × 10^−15 s = 2 fs. An illustration is shown in Fig. 3.
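A short numerical sketch of the spreading formulas (49), (50) and (53), evaluated with the H2 parameters quoted above (Python/NumPy; the sampled times are arbitrary, and the mass entering the formulas is the reduced mass µ, as in the text):

import numpy as np

hbar = 1.054571817e-34     # J s
c = 2.99792458e10          # cm/s
mp = 1.67262192e-27        # kg
mu = mp / 2                # reduced mass of H2
omega = 2 * np.pi * c * 4401.0
alpha0 = mu * omega / (2 * hbar)        # from Eq. (53)

def sigma(t):
    """Width of the spreading Gaussian wave packet, from Eqs. (49)-(50)."""
    return np.sqrt((1 + 4 * hbar**2 * alpha0**2 * t**2 / mu**2) / alpha0)

for t_fs in (0.0, 2.0, 5.0, 10.0):
    t = t_fs * 1e-15
    print(f"t = {t_fs:4.1f} fs   sigma = {sigma(t)*1e10:.3f} Angstrom")
# The width roughly doubles after about 2 fs, consistent with the estimate above.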

We may consider a more general Gaussian wave packet having also a momentum in a certain direction. This means that the center of the wave packet changes. The expression for the wave function is

Ψ(x, t) = (N/√(1 + 2iℏα0 t/m)) exp(−α(t)[x − χ(t)]² + (i/ℏ)p0[x − χ(t)] + ip0²t/(2ℏm)),    (54)

where x0 and p0 are parameters (which do not depend on x and t), α(t) is defined in (50), and

χ(t) = x0 + p0 t/m.    (55)

Exercise 15. Check that the wave function (54) satisfies the TDSE iℏ∂Ψ/∂t =
−(ℏ2 /2m)∂ 2 Ψ/∂x2 .

The interpretation of the wave packet (54) is the following. At t = 0, the wave function is centered at χ(0) = x0, having the width σ = 1/√α(0) = 1/√α0. As time goes on, the center of the wave packet moves so that the center is at χ(t) = x0 + p0t/m. The center moves with speed p0/m, corresponding to a momentum p0. The width of the wave packet increases in time as σ(t) = 1/√(Re α(t)) = √((1 + 4ℏ²α0²t²/m²)/α0). The Gaussian wave packet given by Eq. (54) is perhaps the closest we may get to a free classical particle in quantum mechanics.

3 Electronic structure and potential energy curves
In this section, we learn how the idea of vibrational motion of molecules on potential
energy curves can be derived from first principles. The key idea is the Born-Oppenheimer
approximation, which is essentially the idea that the nuclei are much heavier than the
electrons, and therefore the motion of the electrons follows that of the nuclei. When we
calculate the electronic wave functions, we can consider the motion of the nuclei to be
frozen.

3.1 Born-Oppenheimer approximation


The total time-dependent Hamiltonian for a molecule with Ne electrons and Nn nuclei
having masses M1 , M2 , . . . , MNn and nuclear charge numbers Z1 , Z2 , . . . , ZNn reads

Hmol = He + Hn . (56)

The electronic Hamiltonian is


He (t) = He0 + Ve (t), (57)
where

He0 = Σ_{k=1}^{Ne} [ −(ℏ²/2me) ∇²rk − (e²/(4πε0)) Σ_{l=1}^{Nn} Zl/|rk − Rl| ]
      + (e²/(4πε0)) Σ_{k=1}^{Ne} Σ_{l<k} 1/|rk − rl| + (e²/(4πε0)) Σ_{k=1}^{Nn} Σ_{l<k} Zk Zl/|Rk − Rl|    (58)

is the time-independent part,


Ve(t) = eF(t) · Σ_{k=1}^{Ne} rk    (59)

represents the coupling with the external time-dependent electric field F, and the nuclear Hamiltonian is

Hn = Tn + Vn(t) = Σ_{l=1}^{Nn} [ −(ℏ²/2Ml) ∇²Rl − eF(t) · Rl ].    (60)

The electron coordinates are denoted by rk and the nuclear coordinates are denoted by Rl. The electric field is denoted by F(t). Note that the nuclear repulsion term (e²/(4πε0)) Σ_{k=1}^{Nn} Σ_{l<k} Zk Zl/|Rk − Rl| is included in the electronic Hamiltonian; the reason for this will become clear later.

Exercise 16. Give the physical meaning of each term in the Hamiltonians (57)
[including Eqs. (58) and (59)] and (60).

The total Hamiltonian (56) is the complete Hamiltonian for a molecule, and can
be said to be a “theory of everything” (not including, of course, the strong or weak
interactions). However, the time-dependent Schrödinger equation
∂Ψ(r1 , r2 , . . . , R1 , R2 , . . . , t)
iℏ = Hmol Ψ(r1 , r2 , . . . , R1 , R2 , . . . , t) (61)
∂t
is too complicated to solve directly. We must make approximations. The standard
approximation in molecular quantum mechanics is the Born-Oppenheimer (BO) approx-
imation [1]. The basic idea of the BO approximation is that the electronic motion is
much faster than the nuclear one, and therefore it makes sense to calculate the electronic
and nuclear wave functions separately. First, we obtain the electronic wave functions
assuming that the nuclear motion is frozen. This defines a potential energy curve (the
electronic energy as a function of the positions of the nuclei). Secondly, the nuclear wave
functions are calculated using the potential energy curve as a potential in the Schrödinger
equation.
The first step in the BO approximation is to assume that at each value of the inter-
nuclear coordinates
R = (R1 , R2 , . . . , RNn ), (62)
we have a set of solutions Φn (x; R) to the electronic time-independent Schrödinger equa-
tion (no electric field)
He0 Φn (x; R) = En (R)Φn (x; R), (63)
where x collectively denotes all of the electronic coordinates,
x = (x1 , x2 , . . . , xNe ), (64)
and xk = (rk, ωk) is the combined spatial coordinate rk and spin coordinate ωk of electron k. In Eq. (63), the nuclear coordinates R are fixed, and treated as parameters.
The eigenenergies are denoted by En (R); they depend parametrically on the nuclear co-
ordinates R, and there are in principle an infinite number of electronic states, ranging
from the ground state E0 (R), the first excited state E1 (R), and so on to continuum states
Ek (R). The set of electronic states form a complete, orthonormal set,
⟨Φn (x; R)|Φk (x; R)⟩ = δnk , (65)
where the integration is taken over the electronic coordinates x only. The orthonormality
relation (65) holds at each value of the nuclear coordinates R.
Approximate solutions to Eq. (63) can be calculated by quantum chemistry software packages. There are many such packages available; a few are given in Table 1. Typically, the electronic wave function Φn(x; R) is expanded in a linear combination of Slater determinants ψI,

Φn(x; R) = Σ_I CI ψI(x; R),    (66)

and the Slater determinants are written as

ψI(x; R) = |φI1 φI2 · · · φINe|,    (67)

where φj(x; R) are single-particle spin-orbitals, depending on the electron coordinate r and the spin coordinate ω [recall that x = (r, ω)]. The spin-orbitals are expanded using basis functions fk(r; R),

φj(x; R) = Σ_k cjk fk(r; R) σ(ω),    (68)

Name     | Commercial or freeware | URL                         | Comment
Gaussian | commercial             | gaussian.com                | Widely used.
GAMESS   | freeware               | msg.chem.iastate.edu/gamess | See Ref. [2]. Registration is required before download.
Molpro   | commercial             | molpro.net                  | Good for calculations of excited states in small molecules.
Psi4     | freeware               | psicode.org                 | Includes an interface to Python.
PySCF    | freeware               | pyscf.org                   | Python-based.

Table 1: Quantum chemistry software packages for electronic structure calculations.

where σ(ω) = α(ω) (spin up) or β(ω) (spin down) is a spin function, and cjk are expansion
coefficients. Typically, the basis functions are Gaussian functions centered around one of
the nuclei, like
fℓ(r; R) = Nγℓ e^(−γℓ(r − Rkℓ)²)    (69)

for an s-type basis function centered at nucleus kℓ. In Eq. (69), Nγℓ is a normalization
factor such that ⟨fℓ |fℓ ⟩ = 1. Different methods restrict the number of Slater determinants
included in the sum in (66). A good reference book on electronic structure calculations
is [3]; a good introductory book is [4].
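As an illustration of the first step of the procedure described in this section, the following minimal sketch computes a few points of the ground-state potential energy curve of H2+ with PySCF (one of the packages listed in Table 1). This is not the code used for the figures in these notes, and the basis set choice is an assumption; for a one-electron system, unrestricted Hartree-Fock is exact within the chosen basis, and the energy returned by kernel() already includes the nuclear repulsion term, consistent with Eq. (58):

import numpy as np
from pyscf import gto, scf

for R in np.linspace(0.6, 4.0, 8):                 # internuclear distance in Angstrom
    mol = gto.M(atom=f"H 0 0 0; H 0 0 {R}",
                charge=1, spin=1,                  # H2+: one (unpaired) electron
                basis="cc-pVDZ", unit="Angstrom",
                verbose=0)
    e_tot = scf.UHF(mol).kernel()                  # electronic + nuclear repulsion energy, in Hartree
    print(f"R = {R:5.2f} A   E0 = {e_tot:12.6f} Eh")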
After having solved the electronic Schrödinger equation (63), we make the expansion

Ψ(x, R, t) = Σn Φn(x; R) χn(R, t)    (70)

for the total wave function Ψ(x, R, t) describing both the electrons and the nuclei in the molecule. The wave function expansion (70) is called the Born-Huang expansion [5]. To proceed, we insert the expansion (70) into the TDSE (61), and use Eq. (63). The result is a time-dependent Schrödinger equation for the nuclear wave functions χn(R, t),

iℏ ∂χn(R, t)/∂t = [En(R) + Tn + Vn(t)] χn(R, t) − F(t) · Σ_k Dnk(R) χk(R, t) + Σ_k [Ank(R) + Bnk(R)] χk(R, t),    (71)

where

Dnk = −e Σ_{j=1}^{Ne} ⟨Φn(x; R)| rj |Φk(x; R)⟩    (72)

are the transition dipole moments (the diagonal elements Dnn are referred to as permanent dipole moments), and

Ank(R) = −Σ_{l=1}^{Nn} (ℏ²/Ml) ⟨Φn(x; R)|∇Rl Φk(x; R)⟩ · ∇Rl,    (73)

Bnk(R) = −Σ_{l=1}^{Nn} (ℏ²/2Ml) ⟨Φn(x; R)|∇²Rl Φk(x; R)⟩.    (74)

We have used the notation

∇R = (∂/∂Rx, ∂/∂Ry, ∂/∂Rz).    (75)

In Eqs. (72), (73), and (74), the bra-kets ⟨·|·⟩ mean integrations only over the electronic coordinates x. The Ank(R) and Bnk(R) matrices can be written in terms of the basic matrix element

Q^l_nk = ⟨Φn(x; R)|∇Rl Φk(x; R)⟩,    (76)

as

Ank(R) = −Σ_{l=1}^{Nn} (ℏ²/Ml) Q^l_nk · ∇Rl,    (77)

and

Bnk(R) = −Σ_{l=1}^{Nn} (ℏ²/2Ml) [ ∇Rl · Q^l_nk + Σ_m Q^l_nm · Q^l_mk ].    (78)

Exercise 17. Derive Eq. (71). You have to insert (70) into the TDSE (61), and use Eq. (63). As a second step, multiply the equation from the left side by Φk(x; R), and use Eq. (65).

Exercise 18. Derive Eq. (78) by evaluating ∇Rl · Q^l_nk and using that the electronic states form a complete set, that is, Σ_m |Φm(x; R)⟩⟨Φm(x; R)| = 1. You also need to show that Q^l_nk = −Q^{l*}_kn by applying ∇Rl to Eq. (65).

We now make the approximation to drop the terms including Ank (R) and Bnk (R) in
Eq. (71). This is the Born-Oppenheimer approximation (BO approximation for short).
The resulting time-dependent Schrödinger equation reads
iℏ ∂χn(R, t)/∂t = [En(R) + Tn + Vn(t)] χn(R, t) − F(t) · Σ_k Dnk(R) χk(R, t).    (79)

The interpretation of Eq. (79) is that each nuclear wave packet χn (R, t) moves on a
potential energy surface given by the electronic energy En (R). Transitions between the
electronic states are mediated by the transition dipole term Dnk (R). We also point out
that the term Vn (t) usually drops out after making the separation of the center-of-mass
and the internal coordinates (see the explicit example of H2 + below in Sec. 3.2).
The matrices Ank (R) and Bnk (R) represent non-adiabatic coupling terms, which are
neglected in the BO approximation. The omission of the non-adiabatic coupling terms is
motivated by (i) the large values of the nuclear masses Ml , which appear in Eqs. (73), (74),
and (ii) that the electronic states Φn (x; R) are expected to change slowly as a function of
the nuclear coordinates R, and therefore ∇Rl Φn (x; R) is small. These assumptions are
not always satisfied. For Q^l_nk, we can show that for n ≠ k,

Q^l_nk = ⟨Φn(x; R)|∇Rl Φk(x; R)⟩ = ⟨Φn(x; R)|(∇Rl He0)|Φk(x; R)⟩ / (Ek(R) − En(R)),    (80)

which means that both Ank(R) and Bnk(R) become large when Ek(R) − En(R) becomes small, meaning that the two potential energy surfaces are close in energy.

Exercise 19. Derive Eq. (80) by using Eq. (63).

In the case where there is no external electric field, the non-adiabatic coupling terms
provide the only way of making a transition between different electronic states. This is
important in predissociation, where a transition occurs from an electronic state with a
bound well to a repulsive electronic state, leading to dissociation of the molecule. In this
case, the Ank(R) term is dominant.
To summarize, we have learned that molecular wave functions are calculated in two
steps: (i) The electronic Schrödinger equation (63) is solved for all values of the nuclear
coordinates R. This results in a potential energy curve En (R), which has the physical
meaning of the electronic energy + the nuclear repulsion energy. (ii) The potential energy
curve is used in the Schrödinger equation (79) to calculate the nuclear wave functions.

3.2 Explicit example: H2 +


In this section, we derive the Schrödinger equation in the BO approximation for H2 + .
H2 + contains only two protons and one electron, and is the simplest molecular system
we can imagine. Nevertheless, it is instructive to consider H2 + as an example. The total
Hamiltonian for H2 + reads
H_H2+(t) = H0_H2+ + V_H2+(t),    (81)

where

H0_H2+ = −(ℏ²/2me) ∇²r − Σ_{l=1}^{2} (ℏ²/2mp) ∇²Rl − Σ_{l=1}^{2} e²/(4πε0|Rl − r|) + e²/(4πε0|R1 − R2|),    (82)

and

V_H2+(t) = eF(t) · (r − R1 − R2).    (83)
r is the coordinate of the electron, and R1 , R2 are the coordinates of the protons. We
first change coordinates to the center-of-mass coordinate Rcom , the internuclear separation
vector R, and the electronic coordinate r′ measured from the center of mass of the protons,
Rcom = [mp(R1 + R2) + me r]/(2mp + me),   R = R2 − R1,   r′ = r − (R1 + R2)/2.    (84)
2mp + me 2
The center-of-mass coordinate Rcom describes the position of the molecule as a whole,
while the internal coordinates R and r′ describe the relative positions of the two protons
and the electron.
In the center-of-mass coordinates (84), the H2 + Hamiltonian becomes

H_H2+(t) = H0_H2+(int) + V_H2+(int)(t) + H0_H2+(com) + V_H2+(com)(t),    (85)

where the Hamiltonian for the internal coordinates reads

H0_H2+(int) = −(ℏ²/2µe) ∇²r′ − (ℏ²/2µp) ∇²R − Σ_{a=−1,1} e²/(4πε0|r′ + aR/2|) + e²/(4πε0|R|),    (86)

with the reduced masses

µe = 2mp me/(2mp + me),   µp = mp/2,    (87)

V_H2+(int)(t) = ν eF(t) · r′,    (88)

ν = (2me + 2mp)/(2mp + me), and the center-of-mass Hamiltonian is

H0_H2+(com) = −(ℏ²/(2(2mp + me))) ∇²Rcom,    (89)

V_H2+(com)(t) = −eF(t) · Rcom.    (90)

Exercise 20. Derive Eq. (85). You first have to express r, R1 and R2 in terms of r′, R and Rcom, and then use the chain rule. For example, ∂/∂rx = (∂r′x/∂rx) ∂/∂r′x + (∂Rx/∂rx) ∂/∂Rx + (∂Rcom,x/∂rx) ∂/∂Rcom,x, etc.

We can see in Eq. (85) that the Hamiltonian separates into two terms, one describing
the center-of-mass motion, and another describing the internal coordinates. This means
that we can write the total wave function as a product,

Ψ(r, R1 , R2 , t) = Ψint (r′ , R, t)Ψcom (Rcom , t). (91)

Exercise 21. Confirm that the product wave function (91) solves the TDSE [H_H2+(t) − iℏ∂/∂t] Ψ(r, R1, R2, t) = 0 if Ψint(r′, R, t) and Ψcom(Rcom, t) independently solve [H0_H2+(int) + V_H2+(int)(t) − iℏ∂/∂t] Ψint(r′, R, t) = 0 and [H0_H2+(com) + V_H2+(com)(t) − iℏ∂/∂t] Ψcom(Rcom, t) = 0.

In the following, we will consider the internal Hamiltonian given by Eqs. (86) and (88).
To make the situation even simpler, let’s consider the case when the rotation of H2 + can
be ignored. For example, if we are considering ionization or dissociation processes which
are much faster than the rotational period, we may to a good approximation take the
molecular axis to be frozen in space.

Exercise 22. Derive the time scale for the rotational motion of H2 + by using
Eq. (29). The energy difference can be estimated by considering the ground (J = 0)
and the first excited (J = 1) rotational state. The rotational energy levels are given
by EJ = hcBJ(J + 1), where the rotational constant of H2 + is B = 30 cm−1 (see
NIST chemistry webbook webbook.nist.gov/chemistry).

If we take the molecular axis and the electric field to be aligned along the z-axis, the Hamiltonian in the center-of-mass frame becomes

H_H2+(t) = −(ℏ²/2µe) ∇²r′ − (ℏ²/2µp) (1/R²) ∂/∂R (R² ∂/∂R) − Σ_{a=−1,1} e²/(4πε0|r′ + a ez R/2|) + e²/(4πε0 R) + ν eF(t) z′,    (92)

where R is the internuclear distance, and F (t) is the electric field component along the z
axis.
If we apply the BO approximation, the Schrödinger equation for the nuclear wave functions becomes

iℏ ∂χn(R, t)/∂t = [En(R) − (ℏ²/2µp) (1/R²) ∂/∂R (R² ∂/∂R)] χn(R, t) − F(t) Σ_k Dnk(R) χk(R, t),    (93)

where

Dnk(R) = −e ⟨Φn(r′; R)| z′ |Φk(r′; R)⟩    (94)


is the matrix containing the transition dipole matrix elements. For a homonuclear (both
atoms are the same) diatomic molecule, the diagonal elements Dnn (R) of the transition
dipole matrix are always 0, Dnn (R) = 0.

Exercise 23. Discuss the reason why Dnn (R) = 0 for homonuclear diatomic
molecules.

The potential energy curves En (R) for the ground (n = 1sσg ) and first excited state
(n = 2pσu ) in H2 + , as well as the transition dipole moment D1sσg 2pσu (R) are shown in
Fig. 4. The potential energy curves were calculated by the author of these lecture notes,
by discretizing the electronic Hamiltonian on a ρ–z grid (cylindrical coordinates) using
the finite difference method (see Sec. 5.2 on page 41).

Figure 4: (a) Potential energy curves of the ground state and first excited state of H2+. (b) Transition dipole moment along the molecular axis, given in units of Debye. We have 1 D ≈ 3.336 × 10^−30 C m ≈ 0.2082 eÅ.

4 Light-molecule interaction
In this section, we study an explicit example of light-matter interaction: H2 + in a laser
field. H2 + is the simplest possible molecule, but, as we shall see, still exhibits rich
dynamics when exposed to an intense laser pulse.
The Schrödinger equation for the nuclear wave functions in the BO approximation reads

iℏ ∂χn(R, t)/∂t = [En(R) − (ℏ²/2µp) (1/R²) ∂/∂R (R² ∂/∂R)] χn(R, t) − F(t) Σ_{k=0}^{1} Dnk(R) χk(R, t),    (95)

which is the same as Eq. (93). R is the internuclear distance. We use the labeling n = 0
for the ground state (1sσg ), and n = 1 for the first excited state (2pσu ). Only two
electronic states are included, and the non-adiabatic coupling terms Ank (R) and Bnk (R)
[see Eqs. (73) and (74)] are neglected. In Eq. (95), En (R) are the potential energy curves,
Dnk (R) is the R-dependent transition dipole moment, and F (t) is the laser field. Since
H2 + is a homonuclear molecule, the permanent dipole moments of the ground and excited
states vanish, that is, D00 (R) = D11 (R) = 0. The potential energy curves E0 (R), E1 (R)
and the transition dipole moment D01 (R) = D10 (R) are shown in Fig. 4.
The form of the kinetic energy operator in Eq. (95) can be simplified if we make the substitution

χn(R, t) = ζn(R, t)/R.    (96)

The Schrödinger equation for the modified nuclear wave functions ζn(R, t) becomes

iℏ ∂ζn(R, t)/∂t = [−(ℏ²/2µp) ∂²/∂R² + En(R)] ζn(R, t) − F(t) Σ_{k=0}^{1} Dnk(R) ζk(R, t).    (97)

Note that the boundary condition for ζn (R, t) is

ζn (R = 0, t) = 0, (98)

and the normalization condition is

Σ_{n=0}^{1} ∫_0^∞ dR |ζn(R, t)|² = 1.    (99)

Exercise 24. Derive Eq. (97), and explain the reason for the boundary condition
(98).

It is instructive to rewrite the Schrödinger equation (97) using matrix notation. To do this, we introduce the wave function array (a column vector; the superscript T denotes the transpose)

ζ(R, t) = (ζ0(R, t), ζ1(R, t))^T.    (100)
In terms of ζ(R, t), the Schrödinger equation becomes

iℏ ∂ζ(R, t)/∂t = [H + V(t)] ζ(R, t),    (101)

where the Hamiltonian is a 2 × 2 matrix,

H = ( h0(R)     0
        0     h1(R) ),    (102)

with

hn(R) = −(ℏ²/2µp) ∂²/∂R² + En(R)   (n = 0, 1),    (103)

and

V(t) = (       0          −F(t)D01(R)
        −F(t)D01(R)            0      ).    (104)
From the Schrödinger equation (101) with the time-independent Hamiltonian (102) and
the time-dependent coupling term (104), it is clear that the Hamiltonian (102) represents
the independent motion of the wave packets on the potential energy curves E0 (R) and
E1(R), and V(t) defined in Eq. (104) represents the coupling between the two curves. In the absence of non-adiabatic coupling terms, the laser-induced coupling by V(t) is the only way of making a transition between the two curves E0(R) and E1(R).
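To make the structure of Eqs. (101)-(104) concrete, the following Python sketch assembles the 2 × 2 block matrices on an R grid and performs one Crank-Nicolson time step. This is only one possible propagation scheme, written here for illustration and not necessarily the one used for the figures below (Sec. 5 discusses simulation methods); the curves E0, E1, D01 and the field value F are assumed to be supplied as arrays/numbers, and in practice V would be evaluated at the midpoint of each step:

import numpy as np

def build_blocks(R, E0, E1, D01, F, mu, hbar=1.0):
    """2N x 2N matrices H [Eq. (102)] and V [Eq. (104)] on the grid R."""
    N, dR = len(R), R[1] - R[0]
    # three-point finite difference kinetic energy, -hbar^2/(2 mu) d^2/dR^2
    T = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) * hbar**2 / (2 * mu * dR**2)
    h0, h1 = T + np.diag(E0), T + np.diag(E1)            # Eq. (103)
    H = np.block([[h0, np.zeros((N, N))], [np.zeros((N, N)), h1]])
    C = np.diag(-F * D01)                                # laser-induced coupling
    V = np.block([[np.zeros((N, N)), C], [C, np.zeros((N, N))]])
    return H, V

def cn_step(zeta, H, V, dt, hbar=1.0):
    """One Crank-Nicolson step for i hbar d(zeta)/dt = (H + V) zeta."""
    M = H + V
    A = np.eye(len(zeta)) + 1j * dt / (2 * hbar) * M
    B = np.eye(len(zeta)) - 1j * dt / (2 * hbar) * M
    return np.linalg.solve(A, B @ zeta)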
The time-independent form of Eq. (101) (putting the interaction term to zero, V(t) =
0) is
Eζ(R) = Hζ(R), (105)
which has solutions of the form

ζ(R) = (ξ0,v(R), 0)^T,   ζ(R) = (0, ξ1,E(R))^T,    (106)

where ξ0,v (R) and ξ1,E (R) are the eigenstates of the respective potential energy curve,

E0,v ξ0,v (R) = h0 (R)ξ0,v (R) (107)

and
E1 ξ1,E1 (R) = h1 (R)ξ1,E1 (R). (108)
For the ground state potential energy curve E0 (R), there is a bound well (see Fig. 4),
and there exist a finite number of bound eigenstates with localized wave functions, which
can be labeled by a discrete vibrational quantum number v. Above the dissociation
limit at −13.6 eV, there are also continuum states which represent dissociation. For the
excited state potential energy curve E1 (R), there is no bound well (see Fig. 4), and there
exist only continuum wave functions which oscillate even at large R. A few vibrational
eigenfunctions in H2 + are shown in Fig. 5.
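The bound vibrational states ξ0,v(R) of Eq. (107) can be obtained numerically by diagonalizing h0(R) on a grid, which anticipates the finite difference method of Sec. 5.2. In the short sketch below (Python/NumPy), a Morse-type curve with illustrative parameters is used as a stand-in for the true E0(R) of Fig. 4, so the printed energies are not those of H2+:

import numpy as np

hbar = 1.054571817e-34
mp = 1.67262192e-27
mu = mp / 2                                   # reduced mass, kg

R = np.linspace(0.3e-10, 8.0e-10, 1500)       # grid for R, in meters
dR = R[1] - R[0]

# Stand-in potential: Morse curve with a well near R = 1.06 Angstrom (parameters illustrative)
De = 2.8 * 1.602176634e-19                    # well depth, J
a, Req = 1.4e10, 1.06e-10                     # 1/m, m
E0 = De * (1 - np.exp(-a * (R - Req)))**2 - De

# Kinetic energy by the three-point finite difference formula
T = (2 * np.eye(len(R)) - np.eye(len(R), k=1) - np.eye(len(R), k=-1)) * hbar**2 / (2 * mu * dR**2)
h0 = T + np.diag(E0)                          # Eq. (103) with the stand-in E0(R)

Ev, xi = np.linalg.eigh(h0)                   # eigenvalues E_{0,v}, eigenvectors = columns of xi
print("lowest vibrational energies (eV):", Ev[:4] / 1.602176634e-19)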

4.1 Weak-field interaction


We first study the case of excitation by a weak field. This means that the coupling term
F (t)D01 (R) in Eq. (97) is small, in the sense that the change of the wave function induced
by the coupling term is small. If we start in the electronic ground state, ζ0 (R, t = 0) =
f (R), and ζ1 (R, t = 0) = 0, we require that after the interaction with the pulse, we have

[Figure 5 appears here; the continuum eigenfunctions shown have E = 0.12 eV in panel (a) and E = 0.15 eV and E = 0.63 eV in panel (b).]

Figure 5: (a) Potential energy curve and three vibrational eigenfunctions (two bound and one continuum) in the electronic ground state of H2+. (b) Potential energy curve and two continuum vibrational eigenfunctions in the electronically excited state of H2+. The potential energy curves are shifted so that the dissociation limit (corresponding to H + H+) has energy E = 0. In both (a) and (b), we plot the eigenfunctions shifted by their eigenenergy, that is, we plot ξ0,v(R)·√a0·Eh + Ev in (a), and ξ1,E(R)·√a0·Eh + E in (b). In the case of the continuum wave functions, the value of the energy is indicated next to each curve. The scaling factor √a0·Eh for the wave function is written in terms of the Bohr radius a0 ≈ 0.53 Å and the Hartree energy Eh ≈ 27.21 eV (see the explanation of atomic units in Sec. 5.1 on page 41).

⟨ζ1 |ζ1 ⟩ ≪ 1. In this case, F (t)D01 (R) can be treated as a perturbation, and we can use
first-order perturbation theory to calculate the time-dependent wave function.
In first-order perturbation theory, we start with the zeroth-order wave function, which
is known. We assume that we start in the vibrational ground state on the electronic
ground state potential energy curve, that is,

ζ^(0)(R, t = 0) = (ξ0,v=0(R), 0)^T.    (109)

The time-dependent solution to the unperturbed TDSE

iℏ ∂ζ^(0)(R, t)/∂t = H ζ^(0)(R, t)    (110)

[Eq. (110) is Eq. (101) without V(t)] is

ζ^(0)(R, t) = e^(−iE0,0 t/ℏ) (ξ0,v=0(R), 0)^T,    (111)

where E0,0 is the eigenenergy of the vibrational ground state on the electronic ground
state. The superscript “(0)” on the wave function ζ (0) (R, t) in Eqs. (109)–(111) indicates
the zeroth order (unperturbed). To proceed, we make the wave function ansatz

ζ(R, t) = ζ (0) (R, t) + ζ (1) (R, t), (112)

where the first-order correction term ζ (1) (R, t) is small. Then, we insert the ansatz (112)
into the TDSE (101), where the coupling term V(t) is assumed to be small. The result is

E0,0 ζ^(0)(R, t) + iℏ ∂ζ^(1)(R, t)/∂t = H ζ^(0)(R, t) + V(t) ζ^(0)(R, t) + H ζ^(1)(R, t) + V(t) ζ^(1)(R, t).    (113)

Ignoring the product of two small terms, V(t)ζ^(1)(R, t), and recalling that ζ^(0)(R, t) is an eigenstate of the Hamiltonian H, we end up with an equation for the correction ζ^(1)(R, t),

iℏ ∂ζ^(1)(R, t)/∂t = V(t) ζ^(0)(R, t) + H ζ^(1)(R, t).    (114)
The initial condition for ζ^(1)(R, t) is

ζ^(1)(R, t = 0) = 0,    (115)

since the solution without the perturbation should be given by the zeroth-order wave function ζ^(0)(R, t).
The solution of Eq. (114) is

ζ^(1)(R, t) = (1/iℏ) ∫_0^t e^(−i(t−t′)H/ℏ) V(t′) ζ^(0)(R, t′) dt′ = (0, ζ1^(1)(R, t))^T,    (116)

where

ζ1^(1)(R, t) = −(1/iℏ) ∫_0^t e^(−i(t−t′)h1(R)/ℏ) F(t′) D01(R) ξ0,v=0(R) e^(−it′E0,0/ℏ) dt′.    (117)

Exercise 25. Check that the expression (116) with (117) solves Eq. (114).

In first-order perturbation theory, the total wave function is therefore calculated to be

ζ(R, t) = ζ^(0)(R, t) + ζ^(1)(R, t) = (ξ0,v=0(R) e^(−itE0,0/ℏ), ζ1^(1)(R, t))^T.    (118)
This means that in first order, there is no change in the wave function in the electronic ground state.
We can make a physical interpretation of Eq. (117) by remembering the general solution to the TDSE with a time-dependent Hamiltonian written in the form shown in Eq. (30) (see Sec. 2.2 on page 10). The interpretation of Eq. (117) is that (i) the initial wave function is propagated from t = 0 to t = t′ on the ground state potential energy curve [the factor ξ0,v=0(R) e^(−it′E0,0/ℏ)], (ii) at t = t′ a transition to the excited state is induced by the laser field [the factor F(t′)D01(R)], and then (iii) the wave function is propagated on the excited state potential energy curve [the term e^(−i(t−t′)h1(R)/ℏ)]. (iv) All contributions from different t′ are summed up (the integral ∫_0^t dt′) to form the wave function in the excited state.


To simplify the expression (117) a bit further, we expand the function D01 (R)ξ0,v=0 (R)
in terms of the vibrational eigenstates ξ1,E (R) of the excited state potential energy curve
[see Eq. (108)], Z
D01 (R)ξ0,v=0 (R) = dEcE ξ1,E (R), (119)

where the coefficients cE are given by

cE = ⟨ξ1,E |D01 |ξ0,v=0 ⟩. (120)

Often it is a good approximation to take the transition dipole moment to be a constant,


D01 (R) ≈ D01 (Req ), since the ground state wave function ξ0,v=0 (R) is non-zero only in
a narrow range of R around the equilibrium internuclear distance Req . In this case, we
have
cE = D01 (Req )⟨ξ1,E |ξ0,v=0 ⟩. (121)
Approximating (120) as (121) is called the Franck-Condon approximation. The squared overlap |⟨ξ1,E|ξ0,v=0⟩|² of the initial wave function and the vibrational eigenstates on the excited state is called a Franck-Condon factor. Equation (117) becomes

ζ1^(1)(R, t) = −(1/iℏ) ∫ dE cE e^(−itE/ℏ) fE(t) ξ1,E(R),    (122)

where

fE(t) = ∫_0^t e^(it′(E−E0,0)/ℏ) F(t′) dt′.    (123)
If t becomes large, the expression (123) for the coefficient fE(t) looks like a Fourier transform of the laser field F(t) evaluated at the frequency (E − E0,0)/ℏ. Therefore, for long pulses (quasi-continuous wave), F(t) = F0 sin(ωt), only energies which are approximately resonant [(E − E0,0)/ℏ ≈ ω] with the laser field frequency make a substantial contribution to the excited state wave packet. For short pulses, the whole bandwidth of the pulse contributes.

The total excitation probability pexcit(t) for excitation to the excited electronic state is

pexcit(t) = ⟨ζ1^(1)(t)|ζ1^(1)(t)⟩ = (1/ℏ²) ∫ dE |cE|² |fE(t)|².    (124)

First-order perturbation theory is valid if

pexcit (t) ≪ 1. (125)

There are two cases when Eq. (125) holds. The first is when the coupling F (t)D01 is
small, so that cE fE (t) in Eq. (122) is small. The other case is for short times so that the
fE (t) in (123) is small because of the limited range of integration. If t is small enough,
perturbation theory can still be used even if the coupling with the laser field is not in the
perturbative regime.

4.2 Comparison of perturbation theory and numerically exact results
In this section, we show a few examples of comparisons between the results of first-order perturbation theory and the results of "numerically exact" calculations. The "numerically exact" results were obtained by the finite-difference method as described in Sec. 5.2 on page 41. They are not exact in the sense that we can write down a formula for the results, but the accuracy of the plotted values is better than about 0.1%, and the accuracy can be improved by changing the numerical parameters such as the mesh width and the time step length, as discussed in Sec. 5.
We solve the TDSE (101) for H2+, using an electric field of the form

F(t) = F0 sin²(πt/τ) sin(ωt)   when 0 ≤ t ≤ τ,
F(t) = 0                       otherwise,    (126)

where F0 is the peak field strength, and ω is the angular frequency [related to the wavelength λ of the laser light as λ = 2πc/ω]. The total duration τ of the pulse is defined in terms of the total number of optical cycles N as

τ = 2πN/ω.    (127)

The electric field (126) is a model of a laser pulse, and is convenient theoretically since the sin²(πt/τ) pulse shape guarantees that the electric field vanishes identically for t < 0 and t > τ. A more realistic pulse shape would be a Gaussian function.
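The content of Eqs. (123), (124), (126) and (127) can be illustrated with a short calculation: build the sin² pulse and evaluate |fE(t = τ)|² on a grid of energies. The Franck-Condon amplitudes cE are not computed in this Python sketch, and the numerical values (field strength, energy grid) are illustrative; the point is only that |fE|² is sharply peaked near E − E0,0 = ℏω:

import numpy as np

hbar_eVfs = 0.6582119569          # hbar in eV fs
omega = 4.7                       # angular frequency in rad/fs (the 400 nm case of the text)
F0 = 1.0                          # peak field, arbitrary units
N = 5                             # number of optical cycles
tau = 2 * np.pi * N / omega       # pulse duration, Eq. (127)

t = np.linspace(0.0, tau, 4000)   # time grid in fs
dt = t[1] - t[0]
F = F0 * np.sin(np.pi * t / tau)**2 * np.sin(omega * t)   # Eq. (126)

E00 = 0.0                                     # reference energy of the initial state
E = np.linspace(0.0, 8.0, 400)                # final-state energies in eV
# f_E(tau) = integral_0^tau exp(i t' (E - E00)/hbar) F(t') dt',  Eq. (123)
phase = np.exp(1j * np.outer(E - E00, t) / hbar_eVfs)
fE = (phase * F).sum(axis=1) * dt

# |f_E|^2 enters p_excit via Eq. (124) together with |c_E|^2; it peaks near E - E00 = hbar*omega ~ 3.1 eV.
E_peak = E[np.argmax(np.abs(fE)**2)]
print(f"|f_E|^2 peaks near E = {E_peak:.2f} eV")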
In Fig. 6, we show the calculation of the excitation probability pexcit(t) [defined in Eq. (124)] of H2+, initially in the electronic and vibrational ground state, to the electronically excited state. We use a laser pulse with ω = 4.7 × 10^15 s^−1, corresponding to a wavelength of λ = 2πc/ω = 400 nm or a photon energy of ℏω = 3.1 eV. The numerically exact results are compared with first-order perturbation theory using/not using the Franck-Condon approximation.
The results shown in Fig. 6 are an example of non-resonant laser-molecule interaction,
that is, when the photon energy ℏω is much smaller than the energy gap between the
states. Here we have ℏω = 3.1 eV which is smaller than the energy gap of the two
potential energy curves at the equilibrium internuclear distance Req ≈ 1.06 Å. We have
E1 (Req ) − E0 (Req ) ≈ 11.8 eV (see Fig. 4). We see in Fig. 6 (a) that when the field is weak

Figure 6: The excitation probability to the excited state pexcit (t) [defined in Eq. (124)],
for a peak electric field of (a) F0 = 1 V/Å and (b) F0 = 5 V/Å. The laser wavelength
is λ = 400 nm. We compare the numerically exact results with first-order perturbation
theory using the Franck-Condon (FC) approximation [see Eq. (121)] and without the
Franck-Condon approximation (non-FC) [see Eq. (120)]. In panel (a), the exact blue
curve and the non-FC red curve overlap almost completely. The time dependence of the
electric field is shown as a thick, gray curve with the numerical values indicated on the
right vertical axis.

enough, first-order perturbation theory works well. Non-resonant interaction at weak fields results in a transient excitation of the excited state, but after the pulse has vanished,
all of the population comes back to the ground state. We see in Fig. 6 (b) that when the
peak field is F0 = 5 V/Å (= 5 × 1010 V/m), first-order perturbation theory is not a good
approximation anymore. In this case, there is a certain amount of population transferred
to the excited state after the pulse has vanished. Thus, even for non-resonant excitation,
electronic excitation is possible by multi-photon excitation. Analytically, multi-photon
excitation can be described by higher-order perturbation theory, but the formulas tend
to become rather complicated.
In Fig. 7, we show an example of resonant interaction, where the photon energy ℏω
is equal to the energy gap between the two states, ℏω = E1 (Req ) − E0 (Req ) ≈ 11.8 eV,
which corresponds to wavelength λ ≈ 105 nm (in the vacuum ultraviolet [VUV] range).
In this case, we have a continuous buildup of the population in the excited state even
for weak fields [see Fig. 7(a)]. For strong fields [see Fig. 7(b)], perturbation theory fails
completely and predicts an excitation probability larger than one. This is not surprising,
since perturbation theory is not expected to work well if the excitation probability is not


Figure 7: The excitation probability to the excited state pexcit (t) [defined in Eq. (124)],
for a peak electric field of (a) F0 = 1 V/Å and (b) F0 = 5 V/Å. The laser wavelength
is λ = 105 nm. We compare the numerically exact results with first-order perturbation
theory using the Franck-Condon (FC) approximation [see Eq. (121)] and without the
Franck-Condon approximation (non-FC) [see Eq. (120)]. The time dependence of the
electric field is shown as a thick, gray curve with the numerical values indicated on the
right vertical axis.

31
100
Excitation probability = 105 nm

10-5

= 400 nm
10-10
= 799 nm

10-2 10-1 100 101


F0 (V/Å)

Figure 8: The excitation probability to the excited state pexcit(t = ∞) [defined in Eq. (124)] after the interaction with a laser pulse with N = 5 optical cycles and varying peak field strength F0. The results for three different wavelengths are shown. The numerically exact results are shown with solid lines. The predictions of first-order perturbation theory, pexcit(t = ∞) ∝ F0², are shown with broken lines. Note that both the vertical and the horizontal axes are shown on a logarithmic scale.

much smaller than one.
One of the predictions of perturbation theory is that the excitation probability increases quadratically with the peak field strength F0, pexcit(t) ∝ F0². This is because the excitation probability pexcit(t) [see Eq. (124)] is proportional to |fE|², and fE is proportional to F0, see Eq. (123). The intensity (and energy) of a laser field is also proportional to F0², so pexcit(t) is proportional to the pulse intensity. In Fig. 8, we show the comparison of first-order perturbation theory and the numerically exact calculation. We show the excitation probability pexcit(t = ∞) after the interaction with the pulse (the probability of exciting the molecule to the first excited state) as a function of the peak field strength F0 of the laser pulse. Results for three wavelengths are included, non-resonant (800 nm and 400 nm), and resonant (105 nm). We can see in Fig. 8 that perturbation theory works well up to about F0 = 0.1 V/Å, but that it fails for larger field strengths. For very strong fields F0 ∼ 10 V/Å, the probability oscillates and does not have an obvious structure. We should mention here that for very large fields, there will be substantial ionization, H2+ → 2H+ + e−, which is not included in our model.
If there is substantial excitation to the excited state after the interaction with the laser pulse, the molecule will dissociate, since the excited state potential energy curve is repulsive [there are no bound states, see Fig. 4(a)]. We illustrate this in Fig. 9, where we show |ζ0(R, t)|² and |ζ1(R, t)|², which represent the time-dependent probability distributions of the internuclear distance R. The total probability is conserved, that is, we have

Σ_{k=0}^{1} ∫_0^∞ dR |ζk(R, t)|² = 1.    (128)
[Figure 9 appears here; the three lower panels show snapshots at t = 0 fs, t = 6.6 fs, and t = 25 fs.]

Figure 9: Upper two panels: probability distributions |ζ0 (R, t)|2 and |ζ1 (R, t)|2 of the
internuclear distance R in the ground and excited states. The color scale indicates the
value of |ζk (R, t)|2 in units of Å−1 . The laser field has F0 = 10.3 V/Å, and λ = 400 nm.
The laser field (not to scale) is shown in purple color. Lower three plots: Cuts along
the white dashed lines shown in the upper two plots. Solid line: |ζ0 (R, t)|2 ; dotted line:
|ζ1 (R, t)|2 .
The dissociation in the excited state is seen clearly in the second panel of Fig. 9. The dynamics in the ground state (top panel in Fig. 9) is also interesting. After the interaction with the laser pulse, the wave function in the electronic ground state ζ0 (R, t) is not the vibrational ground state ξ0,v=0 (R), but because of the vibrational excitation (by a Raman-type process: first excitation to the electronically excited state, and then deexcitation to the electronic ground state; see further discussion in Sec. 4.4), we have a wave packet,

\zeta_0(R, t = \tau) = \sum_v c_v\, \xi_{0,v}(R), \qquad (129)

where cv = ⟨ξ0,v |ζ0 (t = τ )⟩ are constant expansion coefficients. This wave packet evolves
in time, as we have learned in Sec. 2.2,
\zeta_0(R, t > \tau) = \sum_v e^{-iE_{0,v}(t-\tau)/\hbar}\, c_v\, \xi_{0,v}(R). \qquad (130)
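To make Eq. (130) concrete, the short Python (NumPy) sketch below evaluates such a superposition on a grid of R values. The coefficients c_v, the energies E_{0,v} and the basis functions ξ_{0,v}(R) used here are hypothetical placeholders chosen only to illustrate the bookkeeping; they are not the actual H2 + vibrational states.

import numpy as np

hbar = 6.582119569e-16                   # eV s
R = np.linspace(0.5, 4.0, 500)           # internuclear distance grid (Å)
E = 0.27*(np.arange(4) + 0.5)            # placeholder energies E_{0,v} (eV)
c = np.array([0.8, 0.5, 0.3, 0.1 + 0j])  # placeholder coefficients c_v
c /= np.linalg.norm(c)
# Placeholder vibrational functions xi_{0,v}(R), normalized on the grid
xi = np.array([(R - 1.06)**v * np.exp(-(R - 1.06)**2/0.05) for v in range(4)])
xi /= np.sqrt((xi**2).sum(axis=1)*(R[1] - R[0]))[:, None]

def zeta0(t):
    # Eq. (130) with tau = 0: sum_v exp(-i E_v t / hbar) c_v xi_v(R)
    return (np.exp(-1j*E*t/hbar)*c) @ xi

rho = np.abs(zeta0(25e-15))**2           # |zeta_0(R, t)|^2 at t = 25 fs

Changing t and plotting rho shows how the different phase factors in Eq. (130) produce a time-dependent, oscillating density.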

After the evolution on the ground state potential energy curve, beautiful interference
structures develop, as can be seen in the bottom plot in Fig. 9.
We end this section with a brief discussion of the strength of the electric fields of a laser pulse. How strong is a peak field of the order of 1 V/Å, as used in Fig. 6? In the lab, the strongest static macroscopic electric field that can be produced is of the order of 10^6 V/m = 10^{-4} V/Å (made by applying a voltage of 10 kV between two plates separated by 1 cm, see [6]). Another reference value is the electric field felt by an electron in the 1s state of the hydrogen atom, F_{1s} = e/(4\pi\varepsilon_0 a_0^2) ≈ 5 × 10^{11} V/m = 50 V/Å. Laser pulses with F_0 ∼ F_{1s} can easily be produced in the lab, which means that the force felt by the valence electrons from the laser field is as large as the attractive Coulomb force from the atomic nuclei. In fact, at large-scale facilities such as the ELI facility in the Czech Republic (see eli-beams.eu), laser pulses with peak electric field strengths of up to 10^{14} V/m = 10^4 V/Å can be produced! Note though that a laser field is an oscillating electric field, not a static one.

4.3 Adiabatic (field-following) state approach


In this section, we will introduce a very intuitive approach to understand laser-molecule
interaction, the adiabatic state approach, originally introduced by Kono and coworkers
[7]. In this approach, the potential energy surfaces are changed into time-dependent
potential energy surfaces, which change in time due to the coupling with the laser field.
The nuclear wave packets move on the time-dependent potential energy surfaces. The
approach is also called the field-following state approach, for reasons that will be apparent
in a moment. The adiabatic state approach works well if the laser field is slowly varying.
This means that the angular frequency ω of a laser field F (t) = F0 sin(ωt) should be
much smaller than the typical energy separation (divided by ℏ) between two electronic
states,
ℏω ≪ E1 (Req ) − E0 (Req ). (131)
In the case of H2 + , we have Req ≈ 1.06 Å, and E1 (Req ) − E0 (Req ) ≈ 11.8 eV (see Fig. 4).
This means that the adiabatic criterion (131) is well satisfied for the typical laser wave-
length λ = 800 nm, which corresponds to a photon energy ℏω = hc/λ ≈ 1.55 eV.
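As a quick numerical check of the criterion (131), the photon energy ℏω = hc/λ can be evaluated for the wavelengths used in this section with a few lines of Python (the constants are the values listed in Sec. 6):

h_eVs = 4.135667696e-15        # Planck's constant (eV s)
c = 2.99792458e8               # speed of light (m/s)
for lam_nm in (800, 400, 105):
    print(lam_nm, h_eVs*c/(lam_nm*1e-9), "eV")   # 1.55, 3.10 and 11.8 eV

Only for λ = 800 nm (and longer wavelengths) is ℏω much smaller than the 11.8 eV gap, so the adiabatic picture is expected to work well there, but not at λ = 105 nm.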
The starting point of the adiabatic state approach is to consider the eigenstates of the potential part of the Hamiltonian H [see Eq. (102)] without the kinetic energy term, but including the interaction with the laser field. We solve the equation
A(R, t)\, Q_k(R, t) = E_k^{(\mathrm{ad})}(R, t)\, Q_k(R, t), \qquad (132)

where

A(R, t) = H + \frac{\hbar^2}{2\mu_p}\frac{\partial^2}{\partial R^2}\,\mathbf{1} + V(t) = \begin{pmatrix} E_0(R) & -F(t)D_{01}(R) \\ -F(t)D_{01}(R) & E_1(R) \end{pmatrix}, \qquad (133)

and \mathbf{1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} is the identity matrix.



Equation (132) is solved at each R and t, so both the eigenvalues E_k^{(\mathrm{ad})}(R, t) and the eigenvectors Q_k(R, t) depend on R and t. Note that in Eq. (132), both the time t and the internuclear distance R are treated as parameters, not dynamical variables.
Because we have only two states, the solution to Eq. (132) can be calculated analyti-
cally as
E_0^{(\mathrm{ad})}(R, t) = \frac{E_0(R) + E_1(R)}{2} - \sqrt{\frac{[E_1(R) - E_0(R)]^2}{4} + [F(t)D_{01}(R)]^2}, \qquad (134)

E_1^{(\mathrm{ad})}(R, t) = \frac{E_0(R) + E_1(R)}{2} + \sqrt{\frac{[E_1(R) - E_0(R)]^2}{4} + [F(t)D_{01}(R)]^2}, \qquad (135)

with the corresponding eigenvectors

Q_0(R, t) = \begin{pmatrix} \cos\Theta(R, t) \\ \sin\Theta(R, t) \end{pmatrix}, \qquad (136)

Q_1(R, t) = \begin{pmatrix} \sin\Theta(R, t) \\ -\cos\Theta(R, t) \end{pmatrix}, \qquad (137)
which are written using the R- and t-dependent angle Θ(R, t), defined by
\tan\Theta(R, t) = \frac{E_0(R) - E_1(R)}{2F(t)D_{01}(R)} + \frac{1}{F(t)D_{01}(R)}\sqrt{\frac{[E_1(R) - E_0(R)]^2}{4} + [F(t)D_{01}(R)]^2}. \qquad (138)

Exercise 26. Derive the eigenvalues (134), (135) and eigenvectors (136), (137) by
diagonalizing the matrix A(R, t) defined in Eq. (132).
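As a numerical cross-check of Exercise 26, the analytic eigenvalues (134) and (135) can be compared with a direct diagonalization of the 2 × 2 matrix A(R, t) at a single point. The numbers E0, E1, D01 and F below are illustrative placeholders standing in for E0(R), E1(R), D01(R) and F(t) at some fixed R and t; they are not taken from the actual H2 + curves.

import numpy as np

E0, E1, D01, F = -18.0, -6.0, 1.0, 3.6     # eV, eV, e·Å, V/Å (illustrative values)
A = np.array([[E0, -F*D01], [-F*D01, E1]])

print(np.linalg.eigh(A)[0])                # numerical eigenvalues (ascending)

mean = 0.5*(E0 + E1)
root = np.sqrt(0.25*(E1 - E0)**2 + (F*D01)**2)
print(mean - root, mean + root)            # analytic, Eqs. (134) and (135)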

We proceed by changing the electronic basis for the nuclear wave packets:
\zeta(R, t) = \zeta_0^{(\mathrm{ad})}(R, t)\, Q_0(R, t) + \zeta_1^{(\mathrm{ad})}(R, t)\, Q_1(R, t). \qquad (139)

Equation (139) can also be written as

ζ(R, t) = Q(R, t)ζ (ad) (R, t), (140)

where

\zeta^{(\mathrm{ad})}(R, t) = \begin{pmatrix} \zeta_0^{(\mathrm{ad})}(R, t) \\ \zeta_1^{(\mathrm{ad})}(R, t) \end{pmatrix}, \qquad (141)

and

Q(R, t) = \begin{pmatrix} Q_0(R, t) & Q_1(R, t) \end{pmatrix} = \begin{pmatrix} \cos\Theta(R, t) & \sin\Theta(R, t) \\ \sin\Theta(R, t) & -\cos\Theta(R, t) \end{pmatrix}. \qquad (142)
We note that Q(R, t) is a unitary matrix, which means that at every R and t we have
QT (R, t)Q(R, t) = Q(R, t)QT (R, t) = 1, (143)
where QT denotes the transpose of the matrix Q (in fact QT = Q here). The fact that
Q(R, t) is a unitary matrix also means that the total probability density ρ(R, t), defined
as
\rho(R, t) = \sum_{k=0}^{1}|\zeta_k(R, t)|^2, \qquad (144)
can be written in terms of the transformed wave functions simply as
\rho(R, t) = \sum_{k=0}^{1}|\zeta_k^{(\mathrm{ad})}(R, t)|^2. \qquad (145)

Equation (145) can be derived by noting that the expression (144) for the total density
can be written in matrix form as
ρ(R, t) = ζ(R, t)† ζ(R, t), (146)
and then using Eqs. (140) and (143).
The next step is to insert Eq. (140) into the TDSE [the equation below is the same
as Eq. (101)]
i\hbar\frac{\partial\zeta(R, t)}{\partial t} = [H + V(t)]\,\zeta(R, t), \qquad (147)
and derive an equation for the adiabatic wave packet ζ (ad) (R, t). We obtain
i\hbar\, Q(R, t)\frac{\partial\zeta^{(\mathrm{ad})}(R, t)}{\partial t} = Q(R, t)\frac{-\hbar^2}{2\mu_p}\frac{\partial^2\zeta^{(\mathrm{ad})}(R, t)}{\partial R^2}
+ Q(R, t)\begin{pmatrix} E_0^{(\mathrm{ad})}(R, t) & 0 \\ 0 & E_1^{(\mathrm{ad})}(R, t) \end{pmatrix}\zeta^{(\mathrm{ad})}(R, t)
- \frac{\hbar^2}{2\mu_p}\frac{\partial^2 Q(R, t)}{\partial R^2}\zeta^{(\mathrm{ad})}(R, t)
- \frac{\hbar^2}{\mu_p}\frac{\partial Q(R, t)}{\partial R}\frac{\partial\zeta^{(\mathrm{ad})}(R, t)}{\partial R}
- i\hbar\frac{\partial Q(R, t)}{\partial t}\zeta^{(\mathrm{ad})}(R, t). \qquad (148)
We now make the approximation of neglecting the terms involving ∂Q(R, t)/∂R and
∂Q(R, t)/∂t on the third and fourth lines in Eq. (148). The motivation is that the laser
field changes slowly so that ∂Q(R, t)/∂t is small, and that the R-behavior of Q(R, t)
is smooth, so that the terms involving ∂Q(R, t)/∂R become small because of the small
prefactor 1/µp . The reasoning here is similar to the discussion in Sec. 3 where we made the
Born-Oppenheimer approximation by ignoring the Ank (R) and Bnk (R) terms in Eq. (71)
on page 18.
Then, by multiplying Eq. (148) from the left by QT , we arrive at the adiabatic TDSE
for the adiabatic wave packet ζ^{(ad)}(R, t)

i\hbar\frac{\partial\zeta^{(\mathrm{ad})}(R, t)}{\partial t} = \left[\frac{-\hbar^2}{2\mu_p}\frac{\partial^2}{\partial R^2} + \begin{pmatrix} E_0^{(\mathrm{ad})}(R, t) & 0 \\ 0 & E_1^{(\mathrm{ad})}(R, t) \end{pmatrix}\right]\zeta^{(\mathrm{ad})}(R, t). \qquad (149)

[Figure 10 appears here: energy (eV) versus R (Å); adiabatic potential energy curves for F = 0, 1, and 3.6 V/Å.]

Figure 10: Adiabatic (field-following) potential energy curves E_0^{(ad)}(R, t) (blue color) and E_1^{(ad)}(R, t) (green color) of H2 + , defined in Eqs. (134) and (135). The adiabatic potential energy curves are shown in the field-free case (solid curves), and for two different electric field strengths (dotted and dash-dotted curves).

Equation (149) is a Schrödinger equation for a wave packet moving on the time-dependent potential energy curves E_0^{(ad)}(R, t) and E_1^{(ad)}(R, t). In the adiabatic approximation, there is no coupling between the two curves, so a wave packet starting on E_0^{(ad)}(R, t) [which in the absence of the laser field reduces to the lowest electronic state E_0(R)] will stay on E_0^{(ad)}(R, t). In other words, we can write Eq. (149) as two separate equations:
i\hbar\frac{\partial\zeta_k^{(\mathrm{ad})}(R, t)}{\partial t} = \left[\frac{-\hbar^2}{2\mu_p}\frac{\partial^2}{\partial R^2} + E_k^{(\mathrm{ad})}(R, t)\right]\zeta_k^{(\mathrm{ad})}(R, t), \qquad (150)

which is valid separately for k = 0 and 1. To recover the wave packets ζ(R, t) on the
field-free, time-independent potential energy curves, we apply Eq. (140).
We show examples of the adiabatic field-following potential energy curves in Fig. 10.

Exercise 27. Derive Eq. (148) by inserting Eq. (140) into Eq. (147). Check that
we can obtain Eq. (149).

4.4 Comparison of the adiabatic state approach and numerically exact results
The adiabatic state approach is expected to be able to describe vibrational excitation of
H2 + for slowly varying long-wavelength laser fields. If the electric field of the laser pulse

[Figure 11 appears here: probability density (Å−1) versus R (Å); color maps of the total density and cuts at t = 0 fs, t = 34 fs, and t = 50 fs.]

Figure 11: Upper two panels: total probability distributions ρ(R, t) = |ζ0 (R, t)|2 +
|ζ1 (R, t)|2 of the internuclear distance R. The results of the exact 2-state model (first
panel) and the results of the adiabatic state approach (second panel) are shown. The
color scale indicates the value of |ζk (R, t)|2 in units of Å−1 . The laser field has F0 = 5.1
V/Å, and λ = 2000 nm. The laser field (not to scale) is shown in purple color. Lower
three plots: cuts along the dashed lines shown in the upper two plots. Solid line: exact
2-state; dotted line: adiabatic state approach.
changes slowly, H2 + will be vibrationally excited by the following mechanism: (i) part of
the wave packet is transferred to the excited state, (ii) the wave packet starts to spread
out and moves towards larger values of R, since the excited state potential energy curve
is repulsive [see Fig. 4(a)], (iii) because the laser field has not yet vanished, part of the
wave packet on the excited state can be deexcited to the ground state, which creates a
vibrationally excited wave packet on the ground state potential energy curve.
These steps require the laser pulse to be long enough for the molecule to stretch
during the laser-molecule interaction. As we learned in Sec. 2.2, the typical time scale T
for vibration is given as
T = \frac{2\pi\hbar}{E_{0,v=1} - E_{0,v=0}}, \qquad (151)
where E0,v=0 and E0,v=1 are the energies of the vibrational ground state and the first
excited vibrational state, respectively, in the electronic ground state. In the case of H2 + ,
we have E0,v=1 − E0,v=0 = 0.27 eV (≈ 2200 cm−1 ), which corresponds to T = 15 × 10−15
s = 15 fs. The same time scale (up to a factor of π) is obtained by considering the time
scale for the spreading out of the wave packet on the excited state potential energy curve
[see the discussion below Eq. (53) on page 14].
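The estimate (151) is simple unit arithmetic; for example, in Python:

import math
hbar_eVs = 6.582119569e-16          # eV s
print(2*math.pi*hbar_eVs/0.27)      # -> about 1.5e-14 s, i.e. 15 fs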
In the adiabatic state approach, the vibrational excitation is understood by the time-
dependence of the adiabatic potential energy curve, and not by the transition to the elec-
tronically excited state. The effect of the excited state is included in the time-dependence
of the potential energy curve.
In Fig. 11, we show the comparison between the full 2-state model [defined by the
TDSE (147)] and the adiabatic state approach [defined by Eq. (150) with k = 0]. We
take a laser field with N = 8 cycles and λ = 2000 nm, which implies a laser cycle
duration of λ/c = 2π/ω = 6.7 fs and a pulse duration of τ = 33.4 fs. We show the total
probability density ρ(R, t) for the internuclear distance [see the definition in Eq. (144)].
We can see in Fig. 11 that the adiabatic state approach describes the broadening and
vibrational excitation of the wave packet rather well. The discrepancy between the two
approaches can be ascribed to non-adiabatic transitions to the excited adiabatic state potential energy curve E_1^{(ad)}(R, t).

5 Simulation methods
In this section we will briefly discuss how the time-dependent Schrödinger equation (TDSE) can be solved numerically. For simplicity, let's consider the TDSE in one dimension,

i\hbar\frac{\partial\Psi(x, t)}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2\Psi(x, t)}{\partial x^2} + V(x, t)\Psi(x, t). \qquad (152)
The TDSE (152) is a partial differential equation in two continuous variables x and t. As
we have discussed earlier, Eq. (152) cannot be solved analytically except in a very few
cases. However, we can obtain very good approximate solutions by using a computer.

5.1 Dimensionless form of the Schrödinger equation


The first step is to obtain a dimensionless form of Eq. (152), where the time t and
coordinate x appear without a physical dimension (or unit, such as second, or ångström).
In a computer program, we can only represent dimensionless numbers, and not physical
quantities having different units. To scale Eq. (152), we introduce the dimensionless time
t′ and the dimensionless coordinate x′ as

t′ = At, x′ = Bx, (153)

where A has the dimension of inverse time (with units of, for example s−1 ), and B has
the dimension of inverse length (for example, in units of m−1 ). In terms of the new
coordinates, the TDSE (152) becomes

i\frac{\partial\Psi'(x', t')}{\partial t'} = -\frac{B^2\hbar}{Am}\,\frac{1}{2}\,\frac{\partial^2\Psi'(x', t')}{\partial x'^2} + V'(x', t')\Psi'(x', t'), \qquad (154)
where we have defined
\Psi'(x', t') = \Psi\!\left(\frac{x'}{B}, \frac{t'}{A}\right), \qquad (155)

and

V'(x', t') = \frac{1}{A\hbar}\, V\!\left(\frac{x'}{B}, \frac{t'}{A}\right). \qquad (156)
The last equation (156) means that the potential is scaled by Aℏ so that the new potential
V ′ is dimensionless (recall that ℏ has the dimension of [energy·time]).

Exercise 28. Derive the scaled TDSE (154) by applying the scaling in Eq. (153).

If we choose A and B such that B 2 /A = m/(κℏ) in Eq. (154), where κ is a (dimen-


sionless) proportionality constant, we obtain the dimensionless TDSE

i\frac{\partial\Psi'(x', t')}{\partial t'} = -\frac{1}{2\kappa}\frac{\partial^2\Psi'(x', t')}{\partial x'^2} + V'(x', t')\Psi'(x', t'). \qquad (157)
We see that κ takes the role of the mass.
The precise choice of the scaling factors A and B depends on the system under study.
As a rule of thumb, A and B should be chosen so that the typical values of the scaled

coordinates and time become of the order of 1. For example, for a vibrational wave function of a molecule, we might take 1/B = 1 Å and 1/A = 1 fs, so that x′ is measured in ångström and t′ in femtoseconds. It is not a good idea to use SI units (meter and second) as the units in a simulation, since the numerical values of x′ and t′ would then be very small.
A very common choice in quantum chemistry simulations is to use atomic units. In this case, we scale the position with the Bohr radius a0 and the time with the atomic unit of time t0, defined as

a_0 = \frac{\hbar}{\sqrt{m_e E_h}} \approx 0.529 \text{ Å}, \qquad (158)

t_0 = \frac{\hbar}{E_h} \approx 24.2 \times 10^{-18} \text{ s} = 24.2 \text{ as}, \qquad (159)

where E_h is the atomic unit of energy (one Hartree),

E_h = \frac{m_e}{\hbar^2}\left(\frac{e^2}{4\pi\varepsilon_0}\right)^2 \approx 4.36 \times 10^{-18} \text{ J} \approx 27.2 \text{ eV}. \qquad (160)

The physical interpretation of t0 is that the time it takes for the electron to go around the proton in the classical Bohr model is 2πt0. The “h” in Eh stands for “Hartree”, after D. R. Hartree (1897 – 1958), a British physicist. The scaling factors are

A = \frac{1}{t_0} = \frac{E_h}{\hbar}, \qquad B = \frac{1}{a_0} = \frac{\sqrt{m_e E_h}}{\hbar}. \qquad (161)
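As a small illustration, the following Python lines compute a0 and t0 from the constants listed in Sec. 6 and convert a coordinate of 1 Å and a time of 1 fs to dimensionless atomic-unit values (a minimal sketch of the scaling (153) with the choice (161)):

import math

hbar = 1.054571817e-34        # J s
me   = 9.1093837015e-31       # kg
Eh   = 4.3597447222071e-18    # J

a0 = hbar/math.sqrt(me*Eh)    # Bohr radius, Eq. (158) -> about 5.29e-11 m
t0 = hbar/Eh                  # atomic unit of time, Eq. (159) -> about 2.42e-17 s

x_prime = 1e-10/a0            # x' = Bx for x = 1 Å  -> about 1.89
t_prime = 1e-15/t0            # t' = At for t = 1 fs -> about 41.3
print(a0, t0, x_prime, t_prime)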

5.2 Finite difference method


The next step is to consider how the dimensionless TDSE (157) can be discretized, that
is, how can the partial differential equation (157) be mapped into something involving
only discrete numbers which can be represented on a computer? We will discuss two
approaches to this problem. The first is called the finite difference method. It is easiest
to discuss the discretization in the spatial coordinate x′ separately from the discretization
in time t′ . We will consider the discretization in x′ first. To begin with, let’s consider the
time-independent (dimensionless) Schrödinger equation,
E\Psi(x) = -\frac{1}{2\kappa}\frac{d^2\Psi(x)}{dx^2} + V(x)\Psi(x), \qquad (162)
where E is the energy and V (x) is a time-independent potential. In Eq. (162), we have
dropped the primes from the variables x and t for easy notation, even though they are
dimensionless quantities.
In the finite difference method, the idea is to use the definition of the derivative
\frac{df(x)}{dx} = \lim_{\delta \to 0}\frac{f(x + \delta) - f(x)}{\delta}, \qquad (163)
but to keep a finite value of δ. An approximate expression for the derivative is

\frac{df(x)}{dx} \approx \frac{f(x + \delta) - f(x)}{\delta} = f_\delta^{(1)}(x), \qquad (164)
which is accurate up to errors of order O(δ). In the Schrödinger equation, we need the second derivative. A good approximation here is the symmetric formula

\frac{d^2 f(x)}{dx^2} \approx \frac{f_\delta^{(1)}(x) - f_\delta^{(1)}(x - \delta)}{\delta} = \frac{f(x + \delta) - 2f(x) + f(x - \delta)}{\delta^2}. \qquad (165)

Exercise 29. Show that the error in the approximation (165) is of order O(δ 2 ) by
using the Taylor expansion f (x + δ) = f (x) + δf ′ (x) + δ 2 f ′′ (x)/2 + · · · and confirming
that all terms proportional to δ vanish.

To discretize the wave function, we first construct a grid of discrete points xj along
the x coordinate, according to

xj = x0 + jδ, j = 1, 2, . . . , jmax , (166)

where x0 is the boundary, δ is the spacing between the points, and jmax is the number of points. We have x1 = x0 + δ, x2 = x0 + 2δ, etc. We then consider the wave function only at these
discrete points,
Ψj = Ψ(xj ), (167)
and the approximation to the kinetic energy term in the Schrödinger equation becomes

-\frac{1}{2\kappa}\frac{d^2\Psi(x_j)}{dx^2} \approx -\frac{1}{2\kappa\delta^2}\left[\Psi(x_j + \delta) - 2\Psi(x_j) + \Psi(x_j - \delta)\right] = -\frac{1}{2\kappa\delta^2}\left(\Psi_{j+1} - 2\Psi_j + \Psi_{j-1}\right). \qquad (168)
If we want to evaluate the wave function at x values other than the xj ’s, we use interpo-
lation.
We also have to consider the boundary conditions. Usually, we have vanishing bound-
ary conditions, which means that the wave function vanishes at the boundaries,

Ψ(x0 ) = Ψ(xjmax +1 ) = 0. (169)

At the end points, we therefore have for the kinetic energy approximation

-\frac{1}{2\kappa}\frac{d^2\Psi(x_1)}{dx^2} \approx -\frac{1}{2\kappa\delta^2}\left(\Psi_2 - 2\Psi_1\right), \qquad (170)

and

-\frac{1}{2\kappa}\frac{d^2\Psi(x_{j_{\max}})}{dx^2} \approx -\frac{1}{2\kappa\delta^2}\left(\Psi_{j_{\max}-1} - 2\Psi_{j_{\max}}\right). \qquad (171)
For the potential energy term we have simply

V (xj )Ψ(xj ) = Vj Ψj , (172)

where Vj = V(xj). If we introduce the notation

\Psi = \begin{pmatrix} \Psi_1 \\ \Psi_2 \\ \vdots \\ \Psi_{j_{\max}} \end{pmatrix}, \qquad (173)

then the discretized version of the SE (162) is written as

EΨ = HΨ, (174)

Name        | Commercial or freeware | URL                       | Comment
Matlab      | commercial             | mathworks.com             | The University of Tokyo has a campus license for students, see utelecon.adm.u-tokyo.ac.jp/en/matlab/
Octave      | freeware               | gnu.org/software/octave   | similar to Matlab
Mathematica | commercial             | wolfram.com/mathematica   | Access in some labs; ECCS
NumPy       | freeware               | numpy.org/                | Numerical calculations in Python. Included in (for example) the Anaconda distribution (anaconda.com)
Julia       | freeware               | julialang.org             | Rather new (since 2012); worth trying

Table 2: Software packages for numerical computation.

where H is the matrix


H = T + V, \qquad (175)

constructed from the kinetic energy matrix

T = -\frac{1}{2\kappa\delta^2}\begin{pmatrix} -2 & 1 & 0 & 0 & 0 & \cdots \\ 1 & -2 & 1 & 0 & 0 & \cdots \\ 0 & 1 & -2 & 1 & 0 & \cdots \\ \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\ \cdots & 0 & 0 & 0 & 1 & -2 \end{pmatrix}, \qquad (176)

and the diagonal potential energy matrix

V = \begin{pmatrix} V_1 & 0 & 0 & \cdots \\ 0 & V_2 & 0 & \cdots \\ \vdots & \ddots & \ddots & \vdots \\ \cdots & 0 & 0 & V_{j_{\max}} \end{pmatrix}. \qquad (177)
Equation (174) can be solved numerically by a numerical linear algebra package, such as
LAPACK. Eigenvalue solvers (subroutines which can solve Eq. (174)) are included in
all scientific software packages, a few examples of which are listed in Table 2. There are
special algorithms which exploit the tridiagonal structure of the Hamiltonian matrix H.
We illustrate the finite difference method in Fig. 12, where we show the finite difference
approximations to the eigenfunctions of the harmonic oscillator Hamiltonian (written in
atomic units)
H_{\mathrm{HO}} = -\frac{1}{2\kappa}\frac{\partial^2}{\partial x^2} + \frac{1}{2}\kappa\omega^2 x^2. \qquad (178)
We take κ = 918 a.u. and ω = 0.02 a.u., which corresponds to a H2 molecule.
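A minimal NumPy sketch of this calculation is given below. The grid range (here x from −2 to 2 a.u.) is a choice made for this example; it is not specified in the text.

import numpy as np

kappa, omega, delta = 918.0, 0.02, 0.1          # a.u., as in the text
x = np.arange(-2.0, 2.0 + 0.5*delta, delta)     # grid points x_j (a.u.)
n = len(x)

# Kinetic energy matrix T, Eq. (176), and diagonal potential matrix V, Eq. (177)
T = -(np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1))/(2.0*kappa*delta**2)
V = np.diag(0.5*kappa*omega**2*x**2)

E, Psi = np.linalg.eigh(T + V)                  # solve E Psi = H Psi, Eq. (174)
for m in range(3):
    exact = omega*(m + 0.5)
    print(m, E[m], (E[m] - exact)/exact)        # relative error, Eq. (179)

The columns of Psi are the discretized eigenfunctions plotted in Fig. 12, and the magnitudes of the printed relative errors come out close to the values quoted below.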
The finite difference approximation (168) is a second-order method, which means that the relative error of, for example, the eigenenergy is expected to be proportional to δ². The relative error for the eigenenergy can be defined as

\Delta = \frac{E_n^{(\mathrm{FD})} - E_n^{(\mathrm{exact})}}{E_n^{(\mathrm{exact})}}, \qquad (179)

[Figure 12 appears here: ψn(x) (Å−1/2) versus x (Å).]

Figure 12: Eigenfunctions of the HO Hamiltonian, for n = 0 (blue lines), n = 1 (red line), and n = 2 (yellow line). Filled circles: finite difference approximation with δ = 0.1 a.u., plotted at the grid points xj. Solid lines: exact eigenfunctions, corresponding to the limit δ → 0.

where E_n^{(FD)} is the energy of the nth state calculated by the finite difference approximation, and E_n^{(exact)} is the exact eigenenergy.
For the wave functions shown in Fig. 12, we know the exact eigenenergy, E_n^{(exact)} = ω(n + 1/2), and we can easily evaluate the error ∆. We obtain ∆(n = 0) ≈ 0.01, ∆(n = 1) ≈ 0.02, and ∆(n = 2) ≈ 0.03 for δ = 0.1 a.u. If we decrease δ by 10 times to δ = 0.01 a.u., we get ∆(n = 0) ≈ 0.0001, ∆(n = 1) ≈ 0.0002, and ∆(n = 2) ≈ 0.0003, showing that ∆ decreases by 100 times and is approximately proportional to δ².
Better accuracy can be obtained if higher-order approximations to the second deriva-
tive are applied. A fourth-order approximation to the second derivative is
\frac{d^2 f(x)}{dx^2} \approx \frac{1}{\delta^2}\left[-\frac{f(x - 2\delta)}{12} + \frac{4f(x - \delta)}{3} - \frac{5f(x)}{2} + \frac{4f(x + \delta)}{3} - \frac{f(x + 2\delta)}{12}\right]. \qquad (180)
Although formula (180) is more accurate than (165), the form of the kinetic energy matrix
T becomes more complicated.

Exercise 30. Show that the error in the approximation (180) is of order O(δ 4 ) by
using the Taylor expansion f (x + δ) = f (x) + δf ′ (x) + δ 2 f ′′ (x)/2 + · · · and confirming
that all terms proportional to δ, δ 2 and δ 3 vanish.

5.3 Basis expansion method


The second discretization method we will consider is the basis expansion method. Again,
we consider the dimensionless SE,

EΨ(x) = HΨ(x), (181)

where
H = -\frac{1}{2\kappa}\frac{d^2}{dx^2} + V(x), \qquad (182)
and assume that we have at our disposal a basis set: a finite set of known functions
fk (x), k = 1, 2, . . . , kmax . (183)
Sometimes we begin at k = 0 instead of k = 1. Usually, because this is most convenient,
we assume that the basis set is orthonormal, that is,
⟨fl |fk ⟩ = δlk . (184)
We also want the set of basis functions to become a complete set in the limit kmax → ∞.
For one-dimensional problems, a commonly used basis set is the set of harmonic oscillator eigenfunctions ψ_k^{HO}(x),

f_k(x) = \psi_k^{\mathrm{HO}}(x) = \sqrt{\frac{b}{k!\,2^k\sqrt{\pi}}}\, h_k(bx)\, e^{-\frac{1}{2}(bx)^2}, \qquad (185)

where hk (x) is a Hermite polynomial (the first few are h0 (x) = 1, h1 (x) = 2x, and
h2 (x) = 4x2 − 2), and b is a scaling parameter (a small b leads to functions with a large
width, and a large b to narrow functions). For electronic structure calculations, the basis
functions are 3D Gaussian functions centered at the nuclei of the molecule. The point is
that all of the basis functions are well defined and can be easily evaluated.
We now expand the wave function in terms of the basis functions as
\Psi(x) = \sum_{k=1}^{k_{\max}} c_k f_k(x), \qquad (186)

where the ck ’s are unknown coefficients. The idea of the basis function expansion method
is to find coefficients ck so that the expansion (186) becomes as good an approximation to the exact solution as possible. In order to derive an equation for the coefficients ck, we insert the expansion (186) into the SE (181), multiply from the left by f_l^{*}(x), and integrate \int_{-\infty}^{\infty} dx. The result is

E c_l = \sum_{k=1}^{k_{\max}} H_{lk}c_k, \qquad (187)

where
Hlk = ⟨fl |H|fk ⟩. (188)
If we define

c = \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_{k_{\max}} \end{pmatrix}, \qquad (189)

and

H = \begin{pmatrix} H_{11} & H_{12} & H_{13} & \cdots \\ H_{21} & H_{22} & H_{23} & \cdots \\ \vdots & \vdots & \vdots & \ddots \\ \cdots & & H_{k_{\max}(k_{\max}-1)} & H_{k_{\max}k_{\max}} \end{pmatrix}, \qquad (190)

[Figure 13 appears here: energy (a.u.) versus x (a.u.) for the double well potential, with the levels 0+, 0−, 1+, and 1− indicated.]

Figure 13: Top panel: schematic illustration of the umbrella inversion of NH3 . Bottom
panel: potential energy curve for the double well potential. The four lowest eigenvalues
are indicated. Note that the two lowest levels, 0+ and 0− , cannot be distinguished at the
scale of the plot.

we can write Eq. (187) as the matrix equation

Ec = Hc. (191)

This is a discrete version of the SE, with the coefficient array c taking the role of the
wave function, and the matrix H taking the place of the Hamiltonian.

Exercise 31. Derive Eq. (187). You have to use the orthonormality (184) of the
basis set.

We illustrate the basis function expansion method by calculating the eigenfunctions


for the double well potential (see Fig. 13). This type of potential can be used to model
the umbrella inversion motion of ammonia (NH3 ), where the coordinate x represents the
distance between the N atom and the plane spanned by the H atoms, as is shown in
the top panel of Fig. 13. See also the discussion in Sec. 2.4 in Quantum Mechanics of
Molecular Structures by Yamanouchi.

[Figure 14 appears here: Ψ(x) (a.u.) versus x (a.u.), three panels for (a) kmax = 1, (b) kmax = 5, and (c) kmax = 21.]

Figure 14: Dotted line: exact ground state eigenfunction of the double well potential.
Blue solid line: Approximate wave function calculated by the basis expansion method,
including (a) kmax = 1, (b) kmax = 5, and (c) kmax = 21 basis functions in the expansion
of the wave function.

The double well potential is modeled by the function (see the online textbook
chem.libretexts.org/link?150640 and [8])
V(x) = \frac{1}{2}Kx^2 + Be^{-Cx^2} - V_0, \qquad (192)
with the parameters K = 0.07598 a.u., B = 0.05684 a.u., and C = 1.36696 a.u., and
V0 is a constant shift so that we have minx [V (x)] = 0. The reduced mass is taken
to be κ = 4668 a.u. With this model, the four lowest energy levels are obtained as
E0+ = 0.002319 a.u., E0− = 0.002323 a.u., E1+ = 0.006536 a.u., and E1− = 0.006711
a.u. The E0+ level and the E0− level are very close in energy. The difference is ∆E =
E0− −E0+ = 4.5×10−6 a.u. = 1.0 cm−1 , to be compared with the experimentally measured
value of ∆E = 0.79 cm−1 . The small splitting can be interpreted in terms of tunneling
between the two wells.
The performance of the basis function expansion method is illustrated in Fig. 14,
where we show the ground state wave function of the double well potential. The basis set
is here chosen as a harmonic oscillator basis set as in Eq. (185), with the scaling constant
b = 4.67 a.u. We can see in Fig. 14 that one basis function can obviously not describe the
double-hump character of the ground state, but with kmax = 5 basis functions, the double-
hump feature is reproduced, and with kmax = 21 basis functions, the agreement is perfect.
The largest contribution comes from the HO eigenfunctions with k = 0 (c0 = 0.5195)
and k = 2 (c2 = 0.8140). We note that because the double-well potential is symmetric, V(x) = V(−x), only even HO basis functions, satisfying fn(x) = fn(−x), contribute to the sum (186).
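A sketch of this calculation in Python/NumPy is shown below. The kinetic energy matrix elements are evaluated analytically in the harmonic oscillator basis (where they are tridiagonal in k), while the potential matrix elements ⟨fl|V|fk⟩ are computed by a simple quadrature on a fine grid; the quadrature grid and its range are choices made here for illustration, and the four lowest eigenvalues should come out close to the values quoted above.

import math
import numpy as np
from numpy.polynomial.hermite import hermval

# Double-well parameters from the text (atomic units); kmax = 21 as in Fig. 14(c)
K, B, C = 0.07598, 0.05684, 1.36696
kappa, b, kmax = 4668.0, 4.67, 21
wb = b**2/kappa                        # frequency of the basis oscillator

def f(k, x):
    # Harmonic oscillator basis function f_k(x), Eq. (185)
    coef = np.zeros(k + 1); coef[k] = 1.0
    norm = math.sqrt(b/(math.factorial(k)*2.0**k*math.sqrt(math.pi)))
    return norm*hermval(b*x, coef)*np.exp(-0.5*(b*x)**2)

# Kinetic energy <f_l| -1/(2 kappa) d^2/dx^2 |f_k>: analytic in the HO basis
T = np.zeros((kmax, kmax))
for k in range(kmax):
    T[k, k] = 0.25*wb*(2*k + 1)
    if k >= 2:
        T[k - 2, k] = T[k, k - 2] = -0.25*wb*math.sqrt(k*(k - 1))

# Potential energy <f_l|V|f_k> by numerical quadrature on a fine grid
x = np.linspace(-2.5, 2.5, 4001)
V = 0.5*K*x**2 + B*np.exp(-C*x**2)
V -= V.min()                           # shift so that min V(x) = 0
fk = np.array([f(k, x) for k in range(kmax)])
Vmat = (fk*V) @ fk.T*(x[1] - x[0])

E, c = np.linalg.eigh(T + Vmat)        # solve Ec = Hc, Eq. (191)
print(E[:4])                           # compare with E0+, E0-, E1+, E1- above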

Exercise 32. Discuss the role of scaling factor b in the harmonic oscillator basis set
Eq. (185) for the calculation of the eigenfunctions of the double-well potential. What
happens if b is very large? Very small? Is there an optimal value of b?

Which of the two methods (the finite difference method or the basis function expansion method) is the better one depends on the problem at hand. The finite
difference method can be efficiently applied to low-dimensional (up to 3D) problems only,
since the size of the discrete matrix representation of the Hamiltonian otherwise becomes
too large. This can be seen by considering the following example. Suppose that we would
like to model two electrons moving in 3 dimensions. The wave function Ψ(r1 , r2 ) depends
on 6 coordinates (x, y, z for each electron). Suppose that we need 100 points along each
coordinate to discretize the wave function (this is a modest requirement). The number
of points in the discrete wave function would therefore be 1006 = 1012 . With double
precision (16 bytes per number), the wave function would occupy over 10 TB of memory!
For low-dimensional problems, the finite difference method is very efficient, and also easy
to code and understand.
On the other hand, the basis expansion method requires a good choice of basis to
be efficient and accurate. If we can find a good basis, we can sometimes get a good
approximation for the energy with only a few basis functions. In electronic structure
theory, many researchers have invested a lot of time over the years to find the best
Gaussian basis sets adapted to different kinds of molecules.

5.4 Time evolution
We now consider how to evolve the wave function forward in time. We assume that
the spatial discretization of the Hamiltonian has been accomplished using one of the
methods described in sections 5.2 and 5.3, and that the (dimensionless) time-dependent
Schrödinger equation
i\frac{\partial\Psi(x, t)}{\partial t} = H(t)\Psi(x, t) \qquad (193)

has been transformed into an ordinary differential equation in t,

i\frac{d\Psi(t)}{dt} = H(t)\Psi(t). \qquad (194)
In Eq. (194), Ψ is an array of coefficients representing the wave function,

\Psi(t) = \begin{pmatrix} \Psi_1(t) \\ \Psi_2(t) \\ \vdots \\ \Psi_{k_{\max}}(t) \end{pmatrix}, \qquad (195)

and H(t) is the time-dependent Hamiltonian matrix. The normalization of the wave
function is expressed as
\sum_{k=1}^{k_{\max}} \Psi_k^{*}\Psi_k = \Psi^{\dagger}\Psi = 1, \qquad (196)

where the Hermitian conjugate of Ψ is denoted by †, that is,

\Psi^{\dagger} = \left(\Psi_1^{*}(t), \Psi_2^{*}(t), \ldots, \Psi_{k_{\max}}^{*}(t)\right). \qquad (197)

The Hamiltonian matrix is Hermitian, and satisfies

H^{\dagger}(t) = H(t), \quad \text{or equivalently} \quad H_{kl}^{*}(t) = H_{lk}(t). \qquad (198)

Equation (194) is a system of kmax ordinary differential equations in t, where kmax is


the number of basis functions (or the number of grid points). The solution is (in principle)
completely determined by the initial condition Ψ(t = t0 ) at t = t0 . The question is how
to obtain numerically an approximation to Ψ(t > t0 ).
The simplest approach would be to use the finite-difference approximation
\frac{d\Psi(t)}{dt} \approx \frac{\Psi(t + \Delta t) - \Psi(t)}{\Delta t} \qquad (199)
for the time derivative, and insert this approximation into Eq. (194). Rearranging the
terms results in a formula for advancing the wave function from t to t + ∆t,

Ψ(t + ∆t) = [1 − i∆tH(t)] Ψ(t). (200)

The bold symbol 1 is used to denote the identity matrix. The formula (200), called
Euler’s method, is a simple formula for time propagation which requires only matrix
multiplication for obtaining the wave function at the next time step. The problem with
the formula (200) is that it requires very small time steps for stable time propagation.
To see this, we first assume for simplicity that the Hamiltonian is time-independent,

H(t) = H. Then we note that the expression for the wave function at a large time
t0 + n∆t (n ≫ 1) becomes according to Eq. (200)

Ψ(t0 + n∆t) = [1 − i∆tH]n Ψ(t0 ). (201)

We also assume that the eigenvalues ϵk and eigenvectors Φk of H are known,

HΦk = ϵk Φk , k = 1, 2, . . . , kmax . (202)

The eigenvectors are orthonormal,

Φ†k Φl = δkl . (203)

Because H has eigenvectors Φk and eigenenergies ϵk , the matrix 1 − i∆tH has eigenvec-
tors Φk (same as H) and eigenvalues 1 − i∆tϵk . Therefore, we can express [1 − i∆tH]n
as
[\mathbf{1} - i\Delta t H]^n = \sum_{k=1}^{k_{\max}} (1 - i\Delta t\epsilon_k)^n\, \Phi_k\Phi_k^{\dagger}. \qquad (204)

If we now take the initial wave function to be an eigenstate of the Hamiltonian matrix,

Ψ(t0 ) = Φℓ (205)

for some ℓ ≤ kmax , Eq. (201) becomes

Ψ(t0 + n∆t) = (1 − i∆tϵℓ )n Φℓ . (206)

The norm of Ψ(t0 + n∆t) is


\Psi^{\dagger}(t_0 + n\Delta t)\Psi(t_0 + n\Delta t) = \left[1 + (\Delta t\epsilon_\ell)^2\right]^n \approx e^{n(\Delta t\epsilon_\ell)^2}, \qquad (207)

which grows exponentially with n. We see that we must select ∆t so that


\Delta t^2 \ll \frac{1}{n\epsilon_\ell^2}, \qquad (208)

otherwise the norm of the wave function blows up and becomes very large. This limits
the applicability of Euler’s method (200).
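The estimate (207) is easy to check numerically. In the small example below (dimensionless units, with an arbitrarily chosen eigenvalue), n(∆tϵℓ)² = 1, so the norm grows by a factor of about e ≈ 2.7 even though ∆tϵℓ = 0.01 looks harmless:

eps, dt, n = 1.0, 0.01, 10000       # eigenvalue, time step, number of steps
norm = (1.0 + (dt*eps)**2)**n       # Eq. (207)
print(norm)                         # about 2.718, i.e. exp(n*(dt*eps)**2)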

Exercise 33. Derive Eq. (204) by using the eigenvector decomposition


H = \sum_{k=1}^{k_{\max}} \epsilon_k\, \Phi_k\Phi_k^{\dagger}. \qquad (209)

Exercise 34. Derive Eq. (206) by using the orthonormality of the eigenfunctions
Φk .

50
A better method is called the Crank-Nicolson method [9]. This method, as we will see,
automatically conserves the norm Ψ† (t)Ψ(t) of the wave function regardless of the size
of the time step ∆t. To obtain the formula for the Crank-Nicolson method, we consider
again the Euler method,

Ψ(t + ∆t) = [1 − i∆tH(t)] Ψ(t). (210)

and also the backward Euler method

[1 + i∆tH(t + ∆t)] Ψ(t + ∆t) = Ψ(t). (211)

Note the difference between Eq. (210) and (211): (210) is an explicit method where the
wave function at the next time step can be calculated directly by matrix multiplication,
but (211) is an implicit equation in the sense that to obtain Ψ(t + ∆t) we have to solve
a system of linear equations, or in other words, we have to calculate the inverse of the
matrix [1 + i∆tH(t + ∆t)]. The Crank-Nicolson method is roughly speaking the average
of Eqs. (210) and (211),

\left[\mathbf{1} + \frac{i\Delta t}{2}H(t)\right]\Psi(t + \Delta t) = \left[\mathbf{1} - \frac{i\Delta t}{2}H(t)\right]\Psi(t). \qquad (212)
Note that H is evaluated at the same time t on both sides of the equation. The Crank-Nicolson method (212) is an implicit method, since we have to solve a system of linear equations to obtain Ψ(t + ∆t) from Ψ(t), so it is slower than the direct Euler method for the same ∆t. However, Crank-Nicolson is still preferable, since it is always stable and, as we show below, always preserves the norm of the wave function, regardless of the value of ∆t. Numerical solvers for matrix equations like Eq. (212) are implemented in all software
packages for numerical computation (see Table 2 on page 43).
We can write Eq. (212) as

\Psi(t + \Delta t) = F_+^{-1}F_-\,\Psi(t), \qquad (213)

where we have defined

F_\pm = \mathbf{1} \pm \frac{i\Delta t}{2}H(t), \qquad (214)

and F_\pm^{-1} is the inverse of the matrix F_\pm. After one time step, the norm of the wave function is

\Psi^{\dagger}(t + \Delta t)\Psi(t + \Delta t) = \Psi^{\dagger}(t)\left(F_+^{-1}F_-\right)^{\dagger}\left(F_+^{-1}F_-\right)\Psi(t). \qquad (215)
We can show (see the exercises 35–37 below) that
\left(F_+^{-1}F_-\right)^{\dagger}\left(F_+^{-1}F_-\right) = \mathbf{1}, \qquad (216)

which means that F_+^{-1}F_- is a unitary matrix. Therefore, the norm of the wave function is conserved at t + ∆t.
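A minimal NumPy sketch of one Crank-Nicolson step is given below, applied repeatedly to a small random Hermitian test Hamiltonian. The Hamiltonian, the dimension and the time step are arbitrary examples (not a model of H2 +); the point is that the norm stays equal to 1 to machine precision for any ∆t.

import numpy as np

def crank_nicolson_step(psi, H, dt):
    # Solve [1 + i dt/2 H] psi(t+dt) = [1 - i dt/2 H] psi(t), Eq. (212)
    I = np.eye(len(psi))
    return np.linalg.solve(I + 0.5j*dt*H, (I - 0.5j*dt*H) @ psi)

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6)) + 1j*rng.normal(size=(6, 6))
H = 0.5*(M + M.conj().T)                      # Hermitian test Hamiltonian

psi = rng.normal(size=6) + 1j*rng.normal(size=6)
psi /= np.linalg.norm(psi)

for _ in range(1000):
    psi = crank_nicolson_step(psi, H, dt=0.1)
print(np.vdot(psi, psi).real)                 # stays equal to 1 to machine precision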

Exercise 35. Show that F+† = F− and F−† = F+ by using the Hermiticity of H
[see Eq. (198)].

Exercise 36. Show that \left(F_+^{-1}\right)^{\dagger} = F_-^{-1}. You may use the expansion

(\mathbf{1} + A)^{-1} = \mathbf{1} - A + A^2 - A^3 + \cdots = \sum_{k=0}^{\infty}(-A)^k, \qquad (217)

which is (formally) valid for a general matrix A (provided that the inverse exists).

Exercise 37. Show that F_+^{-1} and F_- commute, that is, F_+^{-1}F_- = F_-F_+^{-1}. The expansion (217) may be useful also in this exercise.

Exercise 38. Confirm that the results shown in exercises 35–37 imply Eq. (216).

6 Physical constants and unit conversions
For reference, we give the numerical values of a few useful physical constant used in
these lecture notes, and the way to convert between commonly used units. Source: NIST
website, physics.nist.gov/cuu/Constants/index.html. Numbers in parenthesis indicate
the experimental uncertainty.
Planck’s constant (the value in J s is exact)

h = 6.62607015 × 10−34 J s ≈ 4.135667696 × 10−15 eV s. (218)



Reduced Planck's constant ℏ = h/(2π)

ℏ ≈ 1.054571817 × 10−34 J s ≈ 6.582119569 × 10−16 eV s. (219)

Speed of light (exact value)


c = 299792458 m/s. (220)

Bohr radius a0 = 4πε0 ℏ2 /(me e2 ) (atomic unit of length)

a0 ≈ 5.29177210903(80) × 10−11 m ≈ 0.529177211 Å. (221)

Hartree energy Eh = e2 /(4πε0 a0 ) ≈ 2 times the ionization energy of H (atomic unit of


energy)

Eh ≈ 27.211386245988(53) eV ≈ 4.3597447222071(85) × 10−18 J. (222)


Electron mass (u = Dalton = 1/12 of the mass of ¹²C)

me ≈ 9.1093837015(28) × 10−31 kg ≈ 5.48579909065(16) × 10−4 u. (223)

Proton mass

mp = 1.67262192369(51) × 10−27 kg = 1.007276466621(53) u = 1836.15267343(11) me . (224)
Atomic unit of time t0 = ℏ/Eh

t0 ≈ 2.418884325 × 10−17 s ≈ 24.2 as. (225)

Atomic unit of electric field E0 = Eh /(ea0 )

E0 ≈ 5.1422067445 × 1011 V/m ≈ 51.4 V/Å. (226)

Energy conversion

1 eV = 1.602176634 × 10−19 J (exact) ≈ (8065.543937 cm−1 )hc. (227)

Dipole moment conversion (D = Debye)

1 D ≈ 3.335640952 × 10−30 C m ≈ 0.2081943327 eÅ ≈ 0.3934302695 ea0 . (228)
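As an example of how these constants are combined, the cm−1 value in Eq. (227) follows directly from h, c and the electron-volt (a short Python check):

h, c, eV = 6.62607015e-34, 2.99792458e10, 1.602176634e-19   # J s, cm/s, J
print(eV/(h*c))                  # -> about 8065.54 cm^-1 per eV, cf. Eq. (227)
print(eV/4.3597447222071e-18)    # -> about 0.0367 hartree per eV, cf. Eq. (222)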

7 References
[1] M. Born and R. Oppenheimer, Zur Quantentheorie der Molekeln, Ann. Phys. 389,
457 (1927).

[2] M. W. Schmidt, K. K. Baldridge, J. A. Boatz, S. T. Elbert, M. S. Gordon, J. H.


Jensen, S. Koseki, N. Matsunaga, K. A. Nguyen, S. Su, T. L. Windus, M. Dupuis,
and J. A. Montgomery, Jr., General atomic and molecular electronic structure system,
J. Comput. Chem. 14, 1347 (1993).

[3] T. Helgaker, P. Jørgensen, and J. Olsen, Molecular Electronic-Structure Theory (Wi-


ley, Hoboken, NJ, 2000).

[4] A. Szabo and N. S. Ostlund, Modern Quantum Chemistry (Dover Publications, New
York, 1996).

[5] M. Born and K. Huang, Dynamical Theory of Crystal Lattices (Oxford Univ. Press,
Oxford, 1954).

[6] B. Friedrich and D. Herschbach, Spatial orientation of molecules in strong electric


fields and evidence for pendular states, Nature 353, 412 (1991).

[7] Y. Sato, H. Kono, S. Koseki, and Y. Fujimura, Description of Molecular Dynamics in


Intense Laser Fields by the Time-Dependent Adiabatic State Approach: Application
to Simultaneous Two-Bond Dissociation of CO2 and Its Control, J. Am. Chem. Soc.
125, 8019 (2003).

[8] J. D. Swalen and J. A. Ibers, Potential Function for the Inversion of Ammonia, J.
Chem. Phys. 36, 1914 (1962).

[9] J. Crank and P. Nicolson, A practical method for numerical evaluation of solutions
of partial differential equations of the heat-conduction type, Proc. Camb. Phil. Soc.
43, 50 (1947), reprinted in Advances in Computational Mathematics 1996, Volume
6, Issue 1, pp 207-226.
