Hamiltonian Mechanics
Both the Newtonian and the Lagrangian formalisms operate with systems of second-order differential equations
for time-dependent generalized coordinates, q̈i = . . .. For a system with N degrees of freedom, these N
equations can be reformulated as a system of 2N first-order differential equations if one considers the velocities
vi = q̇i as additional dynamical variables. The resulting system has the form q̇i = vi , v̇i = . . ., which is
non-symmetric with respect to qi and vi .
Hamiltonian formalism uses qi and pi as dynamical variables, where pi are generalized momenta defined
by
pi = ∂L/∂q̇i .    (0.1)
The resulting 2N Hamiltonian equations of motion for qi and pi have an elegant symmetric form, which is
the reason for calling them canonical equations. Although for most mechanical problems the Hamiltonian
formalism offers no practical advantage, it is worth studying because of the similarity between its mathematical
structure and that of quantum mechanics. In fact, a significant part of quantum mechanics, using matrix
and operator algebra, grew out of Hamiltonian mechanics. The latter is also invoked in constructing new field
theories. The Hamiltonian formalism finds application in statistical physics, too.
that with the help of Eqs. (0.1) and (1.1) can be rewritten as
dL = ∑i (ṗi dqi + pi dq̇i) .    (1.3)
One can see that the natural variables of H are q and p, and

q̇i = ∂H/∂pi ,    ṗi = −∂H/∂qi ,    (1.6)

which are the Hamiltonian equations of motion for q and p.
The Lagrange function is bilinear in the velocities q̇i ,

L = Ek(q, q̇) − U(q) ,    Ek = (1/2) ∑ij aij(q) q̇i q̇j .    (1.7)
On the other hand, for systems with constraints, such as the double pendulum, different generalized velocities q̇i
couple, since aij ≡ (A)ij in Eq. (1.7) is a non-diagonal matrix. Eq. (1.7) can be written in the matrix-vector
form
Ek = (1/2) q̇ᵀ · A · q̇ ,    (1.11)
where q̇ = (q̇1 , q̇2 , . . .) is a column vector and the transposed q̇ᵀ is a row vector. Further, from Eqs. (0.1) and (1.7) one
obtains
p = A · q̇ (1.12)
and thus

q̇ = A⁻¹ · p ,    q̇ᵀ = (A⁻¹ · p)ᵀ = pᵀ · (A⁻¹)ᵀ = pᵀ · A⁻¹ ,    (1.13)
where we have taken into account that A⁻¹, as well as A, is a symmetric matrix. Plugging this result into
Eq. (1.11) one obtains
H = (1/2) pᵀ · A⁻¹ · p + U .    (1.14)
Although inverting the kinetic-energy matrix A is possible, at least numerically, it makes the Hamiltonian formalism
less convenient. For this reason it is never used for systems with constraints relevant in engineering, exactly
where the Lagrangian approach shows its strength.
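Nevertheless, since A(q) can always be inverted numerically, the passage from (q, q̇) to (q, p) is easy to carry out on a computer. Below is a minimal numerical sketch (not part of the original text) of Eqs. (1.12) and (1.14) for the double pendulum mentioned above; the parameter values are arbitrary illustrative choices.

# A minimal numerical sketch of p = A(q)·q̇ (Eq. 1.12) and H = ½ pᵀA⁻¹(q)p + U(q) (Eq. 1.14)
# for the double pendulum; m1, m2, l1, l2 are illustrative parameters.
import numpy as np

m1, m2, l1, l2, g = 1.0, 1.0, 1.0, 1.0, 9.81

def A(q):
    """Kinetic-energy matrix a_ij(q) of Eq. (1.7) for the double pendulum."""
    th1, th2 = q
    c = np.cos(th1 - th2)
    return np.array([[(m1 + m2) * l1**2, m2 * l1 * l2 * c],
                     [m2 * l1 * l2 * c,  m2 * l2**2]])

def U(q):
    th1, th2 = q
    return -(m1 + m2) * g * l1 * np.cos(th1) - m2 * g * l2 * np.cos(th2)

def hamiltonian(q, p):
    """H = ½ pᵀ A⁻¹(q) p + U(q), Eq. (1.14); A is inverted numerically via a linear solve."""
    return 0.5 * p @ np.linalg.solve(A(q), p) + U(q)

q = np.array([0.3, -0.2])        # generalized coordinates (angles)
qdot = np.array([0.1, 0.5])      # generalized velocities
p = A(q) @ qdot                  # generalized momenta, Eq. (1.12)
print(hamiltonian(q, p), 0.5 * qdot @ A(q) @ qdot + U(q))   # both give Ek + U

Using np.linalg.solve instead of explicitly forming A⁻¹ is the standard numerically stable way to evaluate pᵀA⁻¹p.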
Let us, as an illustration, find the Hamiltonian of a particle in spherical coordinates. Kinetic energy has
the form
Ek = mv²/2 = (m/2)(vr² + vθ² + vφ²) = (m/2)(ṙ² + r²θ̇² + r² sin²θ φ̇²) .    (1.15)
Thus the generalized momenta (0.1) are given by
pr = ∂Ek/∂ṙ = mṙ

pθ = ∂Ek/∂θ̇ = mr²θ̇

pφ = ∂Ek/∂φ̇ = mr² sin²θ φ̇ .    (1.16)
Inserting the results into Eq. (1.15) one obtains
H = Ek + U = (1/(2m)) ( pr² + pθ²/r² + pφ²/(r² sin²θ) ) + U(r, θ, φ) .    (1.18)
The Hamiltonian equations for the coordinates then read

ṙ = ∂H/∂pr ,    θ̇ = ∂H/∂pθ ,    φ̇ = ∂H/∂pφ .    (1.19)
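The same momenta and Hamiltonian can be obtained symbolically. The following sketch (assuming the sympy library; the variable names are chosen here only for illustration) reproduces Eqs. (1.16) and (1.18):

# Symbolic derivation of the spherical-coordinate momenta (1.16) and kinetic Hamiltonian (1.18).
import sympy as sp

m, r, th = sp.symbols('m r theta', positive=True)
rd, thd, phd = sp.symbols('rdot thetadot phidot')      # generalized velocities as plain symbols

Ek = sp.Rational(1, 2) * m * (rd**2 + r**2 * thd**2 + r**2 * sp.sin(th)**2 * phd**2)   # Eq. (1.15)
p_r, p_th, p_ph = [sp.diff(Ek, v) for v in (rd, thd, phd)]     # Eq. (0.1), i.e. Eq. (1.16)
print(p_r, p_th, p_ph)

# Invert the (diagonal) relation p = A·q̇ and express Ek through the momenta -> kinetic part of Eq. (1.18)
Pr, Pth, Pph = sp.symbols('p_r p_theta p_phi')
sol = sp.solve([sp.Eq(Pr, p_r), sp.Eq(Pth, p_th), sp.Eq(Pph, p_ph)], [rd, thd, phd])
H_kin = sp.simplify(Ek.subs(sol))
print(H_kin)    # p_r**2/(2m) + p_theta**2/(2m r**2) + p_phi**2/(2m r**2 sin(theta)**2)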
Here, in contrast to the Lagrange formalism, one considers both pi and qi as independent dynamical variables
and makes the variation
qi ⇒ qi + δqi , pi ⇒ pi + δpi . (2.2)
The corresponding variation of the action has the form
δS = ∑i ∫_{t1}^{t2} [ δpi dqi + pi dδqi − (∂H/∂qi) δqi dt − (∂H/∂pi) δpi dt ] .    (2.3)
Integrating the second term by parts and using that δqi = 0 at the beginning and at the end, one can rewrite
this as

δS = ∑i ∫_{t1}^{t2} [ δpi ( dqi − (∂H/∂pi) dt ) + δqi ( −dpi − (∂H/∂qi) dt ) ] .    (2.4)
Since for the actual trajectories δS = 0 for arbitrary and independent δqi and δpi , one concludes that both
expressions in round brackets are zero, which leads to the Hamiltonian equations (1.6).
df/dt = ∂f/∂t + {f, H} ,    (3.2)
where {f, H} is a Poisson bracket defined by
{f, g} ≡ ∑i ( ∂f/∂qi ∂g/∂pi − ∂f/∂pi ∂g/∂qi ) .    (3.3)
One can see that if f does not depend on time explicitly and its Poisson bracket with the Hamiltonian is
zero, f is an integral of motion. In particular, obviously {H, H} = 0, so that in the absence of explicit time
dependence the energy is an integral of motion, H = E = const. Note the similarity between Eq. (3.2)
and the quantum-mechanical equation of motion for an operator in the Heisenberg representation, where the
commutator [f, Ĥ] replaces the Poisson bracket.
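For explicit calculations the Poisson bracket (3.3) is easy to implement symbolically. The helper below is a sketch (assuming sympy; the harmonic oscillator is chosen here only as a test case); it reproduces {q, p} = 1 and the equation of motion q̇ = {q, H}:

# Poisson bracket of Eq. (3.3) and a check of df/dt = {f, H}, Eq. (3.2).
import sympy as sp

def poisson_bracket(f, g, qs, ps):
    """{f, g} = sum_i (df/dq_i dg/dp_i - df/dp_i dg/dq_i), Eq. (3.3)."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(qs, ps))

q, p, m, w = sp.symbols('q p m omega', positive=True)
H = p**2 / (2*m) + m * w**2 * q**2 / 2            # harmonic oscillator as a test case
print(poisson_bracket(q, p, [q], [p]))            # {q, p} = 1
print(poisson_bracket(q, H, [q], [p]))            # q̇ = {q, H} = p/m
print(sp.simplify(poisson_bracket(H, H, [q], [p])))   # {H, H} = 0, so the energy is conserved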
Poisson brackets satisfy obvious relations, such as antisymmetry, {f, g} = −{g, f}, and the Jacobi identity,

{f, {g, h}} + {g, {h, f}} + {h, {f, g}} = 0 ,

that we quote without proof here. It follows then that if both f and g are integrals of motion, then {f, g}
is also an integral of motion. For the proof we take h = H; the Jacobi identity then takes the form
{f, {g, H}} + {H, {f, g}} + {g, {H, f}} = {H, {f, g}} = 0 ,    (3.6)

where the first and the third terms vanish because {g, H} = {H, f} = 0;
thus {f, g} = const. In principle, this could be used to obtain new integrals of motion from already known
“old” integrals of motion. However, in most cases new integrals of motion {f, g} are trivial, such as functions
of the old integrals of motion.
The straightforwardly obtainable Poisson brackets

{qi, qj} = {pi, pj} = 0 ,    {qi, pj} = δij    (3.7)

are called fundamental Poisson brackets. If the Hamiltonian depends on a canonical pair (q1, p1) only via some
combination F(q1, p1), that is, H({qi, pi}) = H({qi, pi}i≠1 ; F(q1, p1)), then
F(q1, p1) is an integral of motion. This can be proven with the help of Poisson brackets. Since differentiation
with respect to the variables i ≠ 1 gives zero, one obtains
Ḟ = {F, H} = ∂F/∂q1 ∂H/∂p1 − ∂F/∂p1 ∂H/∂q1 = (∂H/∂F) ( ∂F/∂q1 ∂F/∂p1 − ∂F/∂p1 ∂F/∂q1 ) = (∂H/∂F) {F, F} = 0 .    (3.9)
This result can be generalized for nested dependences of the type
H({qi, pi}) = H( {qi, pi}i≠1,2 ; F2(q2, p2; F1(q1, p1)) ) .    (3.10)
Here
F1 (q1 , p1 ) = α1 , F2 (q2 , p2 ; α1 ) = α2 (3.11)
are integrals of motion. For F2 one obtains
Ḟ2 = {F2, H} = ∂F2/∂q1 ∂H/∂p1 − ∂F2/∂p1 ∂H/∂q1 + ∂F2/∂q2 ∂H/∂p2 − ∂F2/∂p2 ∂H/∂q2
= (∂F2/∂F1)(∂H/∂F2) {F1, F1} + (∂H/∂F2) {F2, F2} = 0 .    (3.12)
3.1 Example: Particle’s motion in spherical coordinates
As an example of nontrivial integrals of motion detectable with the help of Poisson brackets consider a
particle moving in the potential
U(r, θ, φ) = a(r) + b(θ)/r² + c(φ)/(r² sin²θ)    (3.13)
in spherical coordinates. In this case, with the help of Eq. (1.18), one obtains the Hamiltonian of the
nested form

H = (1/(2m)) [ pr² + 2ma(r) + (pθ² + 2mb(θ))/r² + (pφ² + 2mc(φ))/(r² sin²θ) ] .    (3.14)
In accordance with the above, one has integrals of motion
pφ² + 2mc(φ) = αφ    (3.15)

and

pθ² + 2mb(θ) + αφ/sin²θ = αθ .    (3.16)
If c(φ) = 0, φ is a cyclic variable and the first integral of motion simplifies to pφ = const. With these two
integrals of motion, the problem becomes effectively one-dimensional with respect to r,

H = (1/(2m)) [ pr² + 2ma(r) + αθ/r² ] ,    (3.17)
and can be integrated directly using energy conservation, H = E = const.
Integration of the whole problem can be done in three steps. From Eq. (3.14) one obtains
pr = mṙ = √(2mE − 2ma(r) − αθ/r²)    (3.18)

that can be integrated,

t = m ∫ dr / √(2mE − 2ma(r) − αθ/r²) ,    (3.19)
to implicitly find r(t).
Second, θ(t) can be defined using the integral of motion αθ , Eq. (3.16):
pθ = mr²θ̇ = √(αθ − 2mb(θ) − αφ/sin²θ) .    (3.20)
This equation can be integrated to define θ(t) implicitly via
∫ dθ / √(αθ − 2mb(θ) − αφ/sin²θ) = ∫ dt / (m r²(t)) .    (3.21)
Note that to work out the integral on the rhs, one first has to find r(t) from Eq. (3.19).
Finally, φ(t) can be defined using the integral of motion αφ
pφ = mr² sin²θ φ̇ = √(αφ − 2mc(φ)) .    (3.22)
3.2 Poisson brackets and commutators
Let us finally work out the correspondence between Poisson brackets and commutators. In quantum mechanics the fundamental commutators
q̂i = qi ,    p̂i = −iℏ ∂/∂qi ,    [q̂i, p̂j] = iℏ δij    (3.24)
are similar to the fundamental Poisson brackets, Eq. (3.7). Operator functions like f (q̂, p̂) that correspond
to classical functions f (q, p) are interpreted as Taylor series with full symmetrization over permutations of
terms. The commutator [f, g] can be calculated, for instance, by expanding both f and g in Taylor series
in q̂ and p̂, commuting term by term, and then summing the series back. The final result of this procedure
can be obtained much more easily by formally substituting
[f, g] ⇒ [ ∑i ( (∂f/∂q̂i) q̂i + (∂f/∂p̂i) p̂i ) , ∑j ( (∂g/∂q̂j) q̂j + (∂g/∂p̂j) p̂j ) ]    (3.25)
and then commuting the momentum and coordinate operators, considering the partial derivatives as numbers.
With the help of Eq. (3.24) this yields the relation

[f, g] = iℏ {f, g} .
In this formula the Poisson bracket should be symmetrized over permutations, ∂q̂f ∂p̂g ⇒ (∂q̂f ∂p̂g + ∂p̂g ∂q̂f)/2,
etc., since q̂ and p̂ are operators. If ℏ is formally considered as small, as is sometimes done in the analysis
of the semiclassical case, the symmetrization is irrelevant. This is because each commutation introduces a
factor ℏ, so that changing the order of terms changes the result by terms starting with ℏ².
4 Canonical transformations
As for any system of differential equations, the Hamiltonian equations (1.6) allow a change of variables

Qi = Qi(q, p, t) ,    Pi = Pi(q, p, t) ,    (4.1)

where q, p are “old” variables and Q, P are “new” variables. The transformation of variables above is called
canonical if the transformed Hamiltonian equations also have a Hamiltonian (canonical) form

Q̇i = ∂H′/∂Pi ,    Ṗi = −∂H′/∂Qi ,    (4.2)
where, in principle, H′ can differ from H. Sometimes one can find a canonical transformation that results in
a simple Hamiltonian function allowing for an easy solution. In particular, if H′ does not depend on Qi , the
variable Qi is called cyclic. The second of the above equations then yields Ṗi = 0 and Pi = const. Now, in
the case of one degree of freedom, the rhs of the first equation above is an integral of motion, Q̇ = α = const.
This equation can be easily integrated, Q = αt + const.
The transformation is canonical if the old and new actions coincide, so that δS = 0 for the old variables entails δS = 0 for the new variables. Then validity of
Eq. (1.6) entails validity of Eq. (4.2). The difference between the two action integrands has the form

∑i pi dqi − ∑i Pi dQi + (H′ − H) dt = dF    (4.3)
or
Qi = pi , Pi = −qi . (4.13)
This transformation interchanges generalized coordinates and momenta. The above example shows that
there is no essential difference between generalized coordinates and momenta in the Hamiltonian formalism.
One cannot say that generalized momenta are related to velocities while generalized coordinates are not.
It should be noted that for the above two transformations, which are obviously canonical, an attempt to
construct the Legendre transformation (4.5) does not work. For instance, one cannot find the primary form F(q, Q)
of the transformation whose secondary form Φ(q, P) is given by Eq. (4.8). Since only F(q, Q) follows
from the least-action principle and it does not exist here, Φ(q, P) loses its relation to the general formalism.
It looks like there are more canonical transformations than those following from the least-action principle.
Here summation over repeated indices is assumed. The equations for Qi are canonical if
The index qp shows that the Poisson brackets are calculated with respect to the old (original) variables. The above
means that the fundamental Poisson brackets of the new variables have the same form as those of the old variables, Eq. (3.7).
However, Eq. (3.7) is trivial, whereas Eqs. (4.15) and (4.16) are not. As the transformation can also be done
in the opposite direction, (Q, P) ⇒ (q, p), by the same method one obtains

{qi, qj}QP = {pi, pj}QP = 0 ,    {qi, pj}QP = δij    (4.17)

as the canonicity criterion.
H = p²/(2m) + kq²/2 = (1/(2m)) (p² + m²ω²q²) ,    (4.18)

where ω = √(k/m) is the oscillator's eigenfrequency. The quadratic dependence on q and p suggests using a
transformation of the type

q = (f(P)/(mω)) sin Q ,    p = f(P) cos Q    (4.19)
that leads to the transformed Hamiltonian

H = (f²(P)/(2m)) cos²Q + (f²(P)/(2m)) sin²Q = f²(P)/(2m)    (4.20)
that is independent of Q. Here function f (P ) can be obtained from the Poisson-brackets canonicity criterion,
Eq. (4.17). In particular, one should have
1 = ∂q/∂Q ∂p/∂P − ∂q/∂P ∂p/∂Q = (f(P)/(mω)) cos Q × (df(P)/dP) cos Q + (1/(mω)) (df(P)/dP) sin Q × f(P) sin Q
= (f(P)/(mω)) df(P)/dP = (1/(2mω)) d f²(P)/dP .    (4.21)
Integrating this equation, one obtains

f(P) = √(2Pmω) .    (4.22)

This yields

q = √(2P/(mω)) sin Q ,    p = √(2Pmω) cos Q .    (4.23)
Now the new Hamiltonian has the form
H = ωP. (4.24)
The generalized momentum

P = H/ω = E/ω    (4.25)

is conserved. The interpretation of P is the action accumulated over the period of motion divided by 2π, that is, the action variable I; see Eq. (5.37).
The equation of motion for the cyclic variable Q is
Q̇ = ∂H/∂P = ω .    (4.26)
Its solution reads
Q = ωt + ϕ0 , (4.27)
where ϕ0 = const. One can see that Q is the oscillator’s phase angle. Inserting the above results into Eq.
(4.23), one finally obtains the solution
q = √(2E/(mω²)) sin(ωt + ϕ0) ,    p = √(2mE) cos(ωt + ϕ0) .    (4.28)
The generating function of the above canonical transformation can be found by integrating the equations

p = ∂F/∂q ,    P = −∂F/∂Q .    (4.29)
Before this, one has to express p or P via q and Q with the help of Eq. (4.23). For instance,
mωq² + p²/(mω) = mωq² + 2P cos²Q = 2P ,    (4.30)
thus
P = mωq² / (2 sin²Q) .    (4.31)
Integrating this equation and discarding the integration constant one obtains
F(q, Q) = (mωq²/2) cot Q .    (4.32)
From this follows
p = ∂F/∂q = mωq cot Q ,    P = −∂F/∂Q = mωq² / (2 sin²Q) .    (4.33)
Resolving the second of these equations for q and then substituting the result into the first equation, one
obtains Eq. (4.23).
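All of the above relations can be verified symbolically. The following sketch (assuming sympy) checks that the transformation (4.23) satisfies the canonicity criterion (4.21), yields the new Hamiltonian (4.24), and is generated by F(q, Q) of Eq. (4.32) through Eqs. (4.29):

# Symbolic check of the oscillator canonical transformation.
import sympy as sp

m, w, P, Q, q = sp.symbols('m omega P Q q', positive=True)

q_of = sp.sqrt(2*P/(m*w)) * sp.sin(Q)        # Eq. (4.23)
p_of = sp.sqrt(2*P*m*w) * sp.cos(Q)

bracket = sp.diff(q_of, Q)*sp.diff(p_of, P) - sp.diff(q_of, P)*sp.diff(p_of, Q)
print(sp.simplify(bracket))                  # -> 1, the canonicity condition (4.21)

H_new = sp.simplify(p_of**2/(2*m) + m*w**2*q_of**2/2)
print(H_new)                                 # -> P*omega, Eq. (4.24)

F = m*w*q**2/2 * sp.cot(Q)                   # generating function, Eq. (4.32)
print(sp.diff(F, q), sp.simplify(-sp.diff(F, Q)))    # p and P of Eq. (4.33)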
4.5 Symplectic formalism
Hamiltonian equations can be put into a compact and elegant symplectic form. Introducing the dynamical
vector
x = {x1 , x2 , . . . , x2N } = {q1 , q2 , . . . , qN , p1 , p2 , . . . , pN } (4.34)
one can write Hamiltonian equations (1.6) in the form
ẋ = J · ∂H/∂x ,    ẋi = Jij ∂H/∂xj ,    (4.35)
where summation over repeated indices is assumed and the matrix J is given by

J = [[0, 1], [−1, 0]] in N × N blocks,    (4.36)

where 1 stands for the N × N unit matrix and 0 for the N × N zero matrix.
Let us now perform a canonical transformation (4.1) without time dependence and introduce the new
dynamical vector
y = {y1 , y2 , . . . , y2N } = {Q1 , Q2 , . . . , QN , P1 , P2 , . . . , PN } . (4.37)
The equation of motion for y follows from Eq. (4.35)
ẏi = (∂yi/∂xj) ẋj = (∂yi/∂xj) Jjk (∂H/∂xk) = (∂yi/∂xj) Jjk (∂yl/∂xk) (∂H/∂yl) .    (4.38)
The condition that the resulting equation is Hamiltonian and thus the transformation is canonical in the
vector and component forms reads
M · J · Mᵀ = J ,    (∂yi/∂xj) Jjk (∂yl/∂xk) = Jil ,    (4.39)
where Mij ≡ ∂yi /∂xj is the Jacobian matrix of the transformation. The condition above contains all four
conditions of the standard formalism, Eqs. (4.15) and (4.16). One can prove the inverse and more general
statement: If the transformation is canonical, Poisson brackets of any two variables A and B are invariant
with respect to the transformation. The proof uses Eq. (4.39):
{A, B}x = (∂A/∂xi) Jij (∂B/∂xj) = (∂A/∂yk) (∂yk/∂xi) Jij (∂yl/∂xj) (∂B/∂yl) = (∂A/∂yk) Jkl (∂B/∂yl) = {A, B}y .    (4.40)
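The symplectic condition (4.39) is convenient for a numerical check of canonicity when the transformation is known only as a map. The sketch below (not from the text) uses the inverse of the oscillator transformation (4.23), written explicitly as Q = arctan(mωq/p), P = (p² + m²ω²q²)/(2mω), builds its Jacobian by finite differences, and verifies M·J·Mᵀ = J at an arbitrary phase-space point:

# Numerical check of M·J·Mᵀ = J, Eq. (4.39), for the oscillator transformation.
import numpy as np

m, w = 1.0, 2.0
N = 1                                        # one degree of freedom here
J = np.block([[np.zeros((N, N)), np.eye(N)],
              [-np.eye(N),       np.zeros((N, N))]])

def y_of_x(x):
    q, p = x
    Q = np.arctan2(m * w * q, p)
    P = (p**2 + (m * w * q)**2) / (2 * m * w)
    return np.array([Q, P])

def jacobian(f, x, h=1e-6):
    """Numerical Jacobian M_ij = dy_i/dx_j by central differences."""
    n = len(x)
    M = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = h
        M[:, j] = (f(x + dx) - f(x - dx)) / (2 * h)
    return M

x0 = np.array([0.7, -0.4])                   # an arbitrary phase-space point (q, p)
M = jacobian(y_of_x, x0)
print(np.round(M @ J @ M.T, 6))              # reproduces J: the transformation is canonical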
5 Action as function of coordinates and Hamilton-Jacobi equation
5.1 General formulation with a simple example
Action S given by Eq. (2.1) was used to derive Lagrangian and Hamiltonian equations of motion from the
least-action principle, δS = 0. This condition singles out the real physical trajectory from all other competing
trajectories. Here we will consider the action for the real trajectory as function of the upper-limit variables
t2 ⇒ t and q2 ⇒ q. The expression for S in the Hamiltonian form, the last term in Eq. (2.1), shows that
as qi and t change in the course of motion, action acquires corresponding increments, so that infinitesimally
one has

dS = ∑i pi dqi − H dt .    (5.1)
Equations (5.2) can be used to set up the famous Hamilton-Jacobi equation that together with canonical
transformations is an efficient tool for finding analytical solutions of mechanical problems. Hamilton-Jacobi
equation
∂S/∂t + H( q, ∂S/∂q, t ) = 0    (5.7)
is a nonlinear first-order partial differential equation (PDE) for the function S (q, t) . As usual, q and ∂S/∂q
in the arguments stand for the whole sets of qi and ∂S/∂qi = pi . For practical purposes it is sufficient to
find just some solution of Hamilton-Jacobi equation rather than its most general solution.
In particular, for the free particle considered above, Eq. (5.7) becomes
∂S/∂t + (1/(2m)) (∂S/∂q)² = 0 .    (5.8)
The solution of this equation can be searched for in the form
S(q, t) = pq − (p²/(2m)) t = √(2mE) q − Et ,    (5.13)
up to an irrelevant constant. One can check that this result coincides with Eq. (5.3):
S(q, t) = pq − (p²/(2m)) t = (mq/t) q − (mq²/t²) (t/(2m)) = mq²/(2t) .    (5.14)
Whereas the solution of the Hamilton-Jacobi equation for one degree of freedom, such as Eq. (5.13), depends on
one constant, the so-called complete integral of the Hamilton-Jacobi equation for N degrees of freedom depends
on N constants that we call Pi . On top of that, one can always add an irrelevant constant to S(q, t) that
has been suppressed in Eq. (5.13). The complete integral yields the solution for the system's dynamics if
one uses it as the generating function of a canonical transformation in terms of the old coordinates and new
momenta, Φ(q, P, t) of Eq. (4.5). The new momenta are the constants Pi in the complete integral. The new
Hamiltonian H′ given by Eq. (4.7) vanishes according to the Hamilton-Jacobi equation:

H′ = H + ∂Φ/∂t = H + ∂S/∂t = H − H = 0 .    (5.15)
Thus Hamiltonian equations for the new dynamic variables Qi and Pi become trivial:
Q̇i = 0 =⇒ Qi = const
Ṗi = 0 =⇒ Pi = const. (5.16)
Time dependences of the old variables qi and pi can be obtained from the first two equations (4.7). First,
qi are found by resolving the equations

Qi = ∂S/∂Pi .    (5.17)
Then pi are given by the formulas
pi = ∂S/∂qi .    (5.18)
Some literature uses αi and βi as new momenta and coordinates within Hamilton-Jacobi formalism, αi ≡ Pi
and βi ≡ Qi .
Let us illustrate finding the dynamical solution for the free particle. One can choose the constant p in the first
expression in Eq. (5.13) as the new momentum P,

S(q, P, t) = Pq − (P²/(2m)) t .    (5.19)
Equation (5.17) takes the form
Q = ∂S/∂P = q − (P/m) t    (5.20)
that yields the solution
q = (P/m) t + Q = (P/m) t + const .    (5.21)
Then momentum is defined by Eq. (5.18):
p = ∂S/∂q = P = const .    (5.22)
Thus Eq. (5.21) reproduces the well-known solution q = (p/m) t + const for a free particle.
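This can also be checked symbolically. The sketch below (assuming sympy) verifies that S of Eq. (5.19) satisfies the Hamilton-Jacobi equation (5.8) and generates the solution (5.21)-(5.22):

# Symbolic check of the free-particle complete integral.
import sympy as sp

q, t, m, P, Q = sp.symbols('q t m P Q', positive=True)
S = P*q - P**2*t/(2*m)                               # complete integral, Eq. (5.19)

hj = sp.diff(S, t) + sp.diff(S, q)**2/(2*m)          # lhs of the Hamilton-Jacobi equation (5.8)
print(sp.simplify(hj))                               # -> 0

q_t = sp.solve(sp.Eq(Q, sp.diff(S, P)), q)[0]        # Eq. (5.20) resolved for q
print(q_t, sp.diff(S, q))                            # q = Q + P*t/m,  p = P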
Alternatively one can use the second expression in Eq. (5.13) and choose energy E as conserved new
momentum P . Now instead of Eq. (5.20) one obtains
Q = ∂S/∂E = √(m/(2E)) q − t    (5.23)
that yields the familiar solution
q = √(2E/m) t + √(2E/m) Q = √(2E/m) t + const .    (5.24)
The old momentum is given by the familiar formula
p = ∂S/∂q = √(2mE) .    (5.25)
This alternative solution, using energy rather than momentum as the conserved new momentum, is preferred
because it survives for systems with a nontrivial potential energy, where momentum is not conserved. As an
example we will consider the harmonic oscillator in the next section.
For a time-independent Hamiltonian one can write S(q, t) = S0(q) − Et, where S0(q) is the short action; Eq. (5.7) then becomes

(1/(2m)) (∂S0/∂q)² + U(q) = E .    (5.27)
Resolving this equation and integrating, one obtains
S(q, E, t) = ∫^q dq′ √(2m[E − U(q′)]) − Et .    (5.28)
Using this as the generating function Φ(q, P, t) with P = E, one obtains the implicit formula for q(t):

Q = ∂S/∂E = ∫^q dq′ √( m / (2[E − U(q′)]) ) − t .    (5.29)
For the harmonic oscillator one can calculate the integral analytically as follows
Q = ∫^q dq′ √( m / (2E − mω²q′²) ) − t = (1/ω) ∫^q̃ dq̃′ / √(1 − q̃′²) − t = (1/ω) arcsin q̃ − t ,    (5.30)
[Figure 5.1: Short action of the harmonic oscillator]
where

q̃ ≡ √(mω²/(2E)) q .    (5.31)
Inverting Eq. (5.30), one obtains the well-known solution

q̃ = sin[ω(t + Q)] ≡ sin(ωt + ϕ0)    (5.32)

or

q = √(2E/(mω²)) sin(ωt + ϕ0) .    (5.33)
After that one finds p as
p = ∂S/∂q = √(2m[E − U(q)])    (5.34)
that for the harmonic oscillator yields the well-known expression
p = √(2mE) √(1 − q̃²) = √(2mE) cos(ωt + ϕ0) .    (5.35)
Finally we calculate the short action S0 , the integral term in Eq. (5.28). The result has the form
S0(q) = I [ q̃ √(1 − q̃²) + arcsin q̃ ] ,    I = E/ω .    (5.36)
Here I is the harmonic-oscillator form of the so-called action variable that will be used below. S0(q) is a
multivalued function of q, taking into account the different branches of the square-root and arcsin functions, see
Fig. 5.1. As q̃ changes from 0 to 1, which takes a quarter of the oscillation period, the expression in brackets
changes from 0 to π/2. The same happens each quarter period, so that the change over the full period amounts
to 2π. Thus the short action accumulated over the period is given by

∆S0^(Period) = 2πI .    (5.37)
Unlike the short action S0 , the full action S does not increase with time on average. Since L = Ek − U and
both kinetic and potential energies oscillate with the same average values, S = ∫ L dt oscillates without
growing.
5.3 Separation of variables
The method of solving Hamilton-Jacobi equation applied to the harmonic oscillator in the preceding section
works for any system with one degree of freedom, described by a potential energy U (q). This method can be
generalized for systems with N degrees of freedom that allow separation of variables. In such systems one or
more canonical pairs (qi , pi ) enter the Hamiltonian H as combinations that do not contain other dynamical
variables. In the case of one such pair the Hamilton-Jacobi equation has the form
∂S/∂t + H( qi≠1 , ∂S/∂qi≠1 , F1(q1, ∂S/∂q1) , t ) = 0 .    (5.38)
The solution can be searched for in the form of the sum
S = S^(N−1)(qi≠1, t) + S0^(1)(q1) .    (5.39)
With this Ansatz Eq. (5.38) becomes
∂S^(N−1)/∂t + H( qi≠1 , ∂S^(N−1)/∂qi≠1 , F1(q1, dS0^(1)/dq1) , t ) = 0 .    (5.40)
Since this equation has to be valid for any value of q1 , the condition F1 = const = α1 should be fulfilled. Thus
Eq. (5.40) splits up into two equations:

F1( q1 , dS0^(1)/dq1 ) = α1

∂S^(N−1)/∂t + H( qi≠1 , ∂S^(N−1)/∂qi≠1 , α1 , t ) = 0 .    (5.41)
This is why this situation is called separation of variables. The first equation here is an ordinary differential
equation that allows for a solution in quadratures. The second equation is a Hamilton-Jacobi equation with
N − 1 degrees of freedom.
If the problem is time independent, one can search for the solution in the form
S = −Et + S0^(N−1)(qi≠1) + S0^(1)(q1) .    (5.42)
This results in simpler equations
F1( q1 , ∂S0^(1)/∂q1 ) = α1

H( qi≠1 , ∂S0^(N−1)/∂qi≠1 , α1 ) = E .    (5.43)
A particular case of separation of variables is the case of a cyclic variable q1 that does not enter the
Hamiltonian. Then F1(q1, ∂S0^(1)/∂q1) reduces to ∂S0^(1)/∂q1 , so that the first equation (5.41) can be easily
integrated and Eq. (5.39) becomes

S = S^(N−1)(qi≠1, t) + α1 q1 .    (5.44)
Time is also a cyclic variable, and −Et in Eq. (5.42) is similar to α1q1 in the above equation.
If there is another separating variable q2 , the second equation (5.41) can be further simplified in terms of
F2 = α2 , S0^(2) , and the remainder action S^(N−2). If all N variables separate, the procedure results in the complete
integral of the Hamilton-Jacobi equation of the completely additive form

S = S^(0)(t) + ∑_{i=1}^{N} S0^(i)(qi, {αi}) ,    (5.45)
whereas S^(0) satisfies the equation

∂S^(0)/∂t + H(α1, α2, . . . , αN, t) = 0    (5.46)
that also can be solved in quadratures. Each term of Eq. (5.45) depends on one or more of the constants αi . For
time-independent problems one obtains

S = −Et + ∑_{i=1}^{N} S0^(i)(qi, {αi}) .
The complete integral above can now be used to define the system's dynamics as was explained in Sec. 5.1, see
Eqs. (5.17) and (5.18). Let us write these equations again:

βi = ∂S/∂αi ,    βi = const .    (5.48)
Note the equivalence αi ⇔ Pi and βi ⇔ Qi . As, in general, S0^(i)(qi, {αi}) depends not only on its own
constant αi but on other constants as well, these equations for different i are coupled. For simpler problems,
such as the three-dimensional harmonic oscillator, one has S0^(i)(qi, {αi}) = S0^(i)(qi, αi), and the different equations
above are uncoupled.
As was discussed in Sec. 3.1, for a particle moving in the potential

U(r, θ, φ) = a(r) + b(θ)/r² + c(φ)/(r² sin²θ)    (5.49)

integrals of motion can be found with the help of Poisson brackets, after which the problem can be
integrated. Now we solve the same problem by the Hamilton-Jacobi formalism. The Hamilton-Jacobi equation has
the form
∂S/∂t + (1/(2m)) (∂S/∂r)² + a(r) + (1/(2mr²)) { (∂S/∂θ)² + 2mb(θ) + (1/sin²θ) [ (∂S/∂φ)² + 2mc(φ) ] } = 0 .    (5.50)
Separating the variables with the Ansatz S = −Et + Sr(r) + Sθ(θ) + Sφ(φ), one obtains

(∂Sφ/∂φ)² + 2mc(φ) = αφ = const

(∂Sθ/∂θ)² + 2mb(θ) + αφ/sin²θ = αθ = const    (5.52)

(∂Sr/∂r)² + 2ma(r) + αθ/r² = 2mE = const .    (5.53)
Note that Sθ depends on constants αφ and αθ , whereas Sr depends on αθ and E. Integration of these
equations yields
S = −Et + ∫ dr √(2mE − 2ma(r) − αθ/r²) + ∫ dθ √(αθ − 2mb(θ) − αφ/sin²θ) + ∫ dφ √(αφ − 2mc(φ))    (5.54)
that depends on three constants E, αφ , and αθ . Differentiating S with respect to these constants and
equating the results to other constants, one obtains equations of motion in the implicit form that include
six constants, as it should be. These equations are the following:
βr = ∂S/∂E = −t + ∫ m dr / √(2mE − 2ma(r) − αθ/r²)

βθ = ∂S/∂αθ = ∫ dθ / ( 2 √(αθ − 2mb(θ) − αφ/sin²θ) ) − ∫ dr / ( 2r² √(2m[E − a(r)] − αθ/r²) )

βφ = ∂S/∂αφ = ∫ dφ / ( 2 √(αφ − 2mc(φ)) ) − ∫ dθ / ( 2 sin²θ √(αθ − 2mb(θ) − αφ/sin²θ) ) .    (5.55)
The first of these equations, after integration and solving for r, yields r(t). Then the second equation, after
integration and solving for θ, yields θ(t), expressed via r(t). Finally, the third equation yields the solution
φ(t), expressed via θ(t), which is in turn expressed via r(t). This solution is equivalent to the one
using Poisson brackets in Sec. 3.1, which is much less involved.
The practical usefulness of these solutions is questionable since, in general, the integrals cannot be done
analytically. In this case direct numerical solution of the original problem is much simpler than the numerical
treatment of these integrals.
where ω = 2π/T is the frequency of motion, T = T (E) being the period of motion. The proof of the last
relation is the following
dI/dE = d/dE [ (1/2π) ∮ √(2m[E − U(q)]) dq ] = (1/2π) ∮ √( m / (2[E − U(q)]) ) dq = (1/2π) ∮ dq/q̇ = (1/2π) ∮ dt = T/(2π) = 1/ω .    (6.6)
Note the difference between the relations ω = dE/dI and ω = E/I in Eq. (5.36). The latter is valid only for
a harmonic oscillator. The choice of the action variable I above anticipates the nice result ϕ̇ = ω, which allows
one to interpret ϕ as the phase or the angle of the periodic motion that grows linearly with time, ϕ(t) = ωt + ϕ0 .
The time dependence of q can be found from Eq. (6.4).
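The relations ω = E/I (harmonic oscillator only) and dI/dE = 1/ω (general, Eq. (6.6)) are easy to check numerically by evaluating I(E) as a quadrature between the turning points. The sketch below (not from the text; m and ω0 are arbitrary illustrative values) does this for a harmonic oscillator:

# Numerical check of I = E/omega and dI/dE = 1/omega for a harmonic oscillator.
import numpy as np
from scipy.integrate import quad

m, omega0 = 1.3, 0.7
U = lambda q: 0.5 * m * omega0**2 * q**2

def action_variable(E):
    """I(E) = (1/2pi) * closed-orbit area = (1/pi) * int sqrt(2m(E-U(q))) dq over half the orbit."""
    qmax = np.sqrt(2.0 * E / (m * omega0**2))          # turning point, U(qmax) = E
    integrand = lambda q: np.sqrt(max(2.0 * m * (E - U(q)), 0.0))
    val, _ = quad(integrand, -qmax, qmax)
    return val / np.pi

E, dE = 2.0, 1e-4
I = action_variable(E)
dI_dE = (action_variable(E + dE) - action_variable(E - dE)) / (2.0 * dE)
print(I, E / omega0)            # I  ~  E/omega (harmonic oscillator only)
print(dI_dE, 1.0 / omega0)      # dI/dE  ~  1/omega (general relation, Eq. (6.6))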
For the harmonic oscillator, S0 (q) and the relation between E and I are given by Eqs. (5.36) and (5.37).
It can be obtained by integration of dE/dI = ω in the second equation (6.5) with ω = const. The relation
between the (q, p) and (ϕ, I) variables that follows from Eq. (6.4) reads
q = √(2I/(mω)) sin ϕ ,    p = √(2mIω) cos ϕ .    (6.7)
This is obviously a form of Eq. (4.28) or Eqs. (5.33) and (5.35). One can obtain a convenient representation
of S0 in terms of ϕ:
S0 = ∫^q p dq′ = ∫^ϕ p(ϕ′) (dq(ϕ′)/dϕ′) dϕ′ = 2I ∫^ϕ cos²ϕ′ dϕ′ = I [ ϕ + (1/2) sin(2ϕ) ] .    (6.8)
Together with the first equation (6.7), this formula gives a parametric representation of S0(q) shown in Fig.
5.1.
Equation (6.3) can be written in the form
I = (1/2π) ∬ dp dq ,    (6.9)
where the double integral is the area enclosed by the closed orbit in the phase plane (q, p). By contrast,
the trajectory of the system in the phase plane (ϕ, I) is just a straight line. One can say that
transformation to the angle-action variables straightens the trajectory. Of course, the action-angle formalism
is merely a variant of Hamilton-Jacobi method of solving mechanical problems. In the latter, the full action
S is used as the generating function of a canonical transformation rather than the short action S0 here.
Hamilton-Jacobi method is even more radical because after the canonical transformation trajectories reduce
to points.
Trajectories of integrable systems corresponding to different initial conditions are straight lines that do
not cross and depend smoothly on the initial conditions. By contrast, for nonintegrable systems (which are
the majority of mechanical systems) angle-action variables cannot be found and trajectories cannot be
straightened. This usually leads to an apparently irregular behavior known as dynamical chaos.
One apparent difference between integrable and non-integrable systems is that the former have many
integrals of motion Ii , one for each separable degree of freedom i. These integrals of motion, expressed
through the natural variables {qi , pi } , impose limitations on the regions in the phase space accessible to
the system. This makes the motion of the system regular. By contrast, non-integrable systems do not
have integrals of motion depending on a small subset of dynamical variables. Thus much more of the phase space
becomes accessible to them.
after which Λ = Λ(ϕ, I, λ). The Hamiltonian equations are modified by the terms with λ̇:

İ = −∂H′/∂ϕ = −(∂Λ/∂ϕ) λ̇

ϕ̇ = ∂H′/∂I = ω(I, λ) + (∂Λ/∂I) λ̇ ,    (6.13)
where ω (I, λ) = ∂E (I, λ) /∂I. One can see that I and hence E are no longer integrals of motion because of
the time dependence of λ.
Let us consider, as an illustration, a harmonic oscillator with time-dependent frequency, the Hamiltonian
being given by Eq. (4.18) with ω = ω(t). Short action is given by Eq. (5.36), where the dependence on ω
enters via q̃ defined by

q̃ ≡ √(mω/(2I)) q    (6.14)
that follows from Eqs. (5.31) and (5.36). Thus for Λ in Eq. (6.11) with the help of Eq. (5.32) one obtains
Λ = (∂S0/∂q̃)_I (∂q̃/∂ω)_{q,I} = 2I √(1 − q̃²) × q̃/(2ω) = (I/ω) sin ϕ cos ϕ = (I/(2ω)) sin(2ϕ) .    (6.15)
Now Eqs. (6.13) take the form
İ = −I (ω̇/ω) cos(2ϕ)

ϕ̇ = ω + (ω̇/(2ω)) sin(2ϕ) .    (6.16)
Note that the second of these equations is closed, as it does not contain I, and can be solved independently.
instead of S0 (q, I, λ(t)) for the canonical transformation. Then one obtains the same equations (6.10) and
(6.13), whereas Λ is now given by
Λ ≡ (∂S0*(q, ϕ, λ)/∂λ)_{q,ϕ} = (∂S0(q, I, λ)/∂λ)_{q,I} .    (6.18)
One can see that this Λ is the same as the above. However, S0∗ (q, ϕ, λ) does not grow on average with ϕ,
because the increment 2πI of S0∗ (q, ϕ, λ) over the period of motion is exactly compensated for by the term
−I × 2π in Eq. (6.17). Thus it is obvious that Λ is periodic with zero average.
In the first equation (6.13) the coefficient ∂Λ/∂ϕ is also periodic with zero average. If the change of λ
over the period of oscillations T is small,

λ̇ T / λ ≪ 1 ,    (6.19)

the change of I becomes very small upon integration over time, as the oscillations in the rhs average out. Thus I
is a so-called adiabatic invariant of motion. By contrast, the energy E is not an adiabatic invariant.
For instance, if the frequency ω of a harmonic oscillator is slowly changing, one has
I = E/ω = const ,    E ∝ ω .    (6.20)
Adiabatic invariants also emerge in the motion of a charged particle in a weakly non-uniform magnetic field
and in quantum mechanics. The Bohr-Sommerfeld quasiclassical quantization condition has the form

I = (1/2π) ∮ p dq = nℏ ,    (6.21)
where n is an integer. If the parameters of the system change slowly, I practically does not change, and neither
does n. Were I not an adiabatic invariant, it would change continuously with the parameters of the system.
However, n cannot change continuously. Thus we conclude that only adiabatic invariants are suitable for
imposing quantization in going from classical to quantum mechanics.
The change of I becomes especially small if λ changes slowly over the time interval (−∞, ∞). In this case the
integral

∆I = −∫_{−∞}^{∞} (∂Λ/∂ϕ) λ̇ dt    (6.22)
can be usually transformed by shifting the integration contour into the complex plane to suppress oscillations
of the integrand. Then the dominant contribution to ∆I comes from the singularity of the integrand closest
[Figure 6.1: Time dependence of the change of the action variable I of a harmonic oscillator with slowly changing frequency. Since I is an adiabatic invariant, ∆I is small at all times. (See the parameters of the numerical calculation in the text.)]
to the real axis, and ∆I becomes exponentially small. The smaller λ̇ is, the larger the negative exponent
in ∆I.
As a numerical example one can consider a harmonic oscillator with the energy function
E = (m/2) [ ẋ² + ω²(t) x² ] ,    (6.23)
ω(t) changing from ω(−∞) to ω(∞) according to
ω(t) = ω0 + ∆ω (1 + tanh(αt))/2    (6.24)
with α > 0. The change of ω(t) is slow if α ≪ ω0 . Let us set α = ω0/10. Also we set the oscillator mass m = 2.
For ω0 = ∆ω = 1 one has ω(−∞) = 1 and ω(∞) = 2. For the initial state x(−∞) = 1 and ẋ(−∞) = 0 the
initial oscillator energy is E(−∞) = 1. The initial value of the action variable is I(−∞) = E(−∞)/ω(−∞) = 1.
Numerical integration of the equation of motion ẍ + ω 2 (t)x = 0 yields the results for I(t) shown in Fig. 6.1.
One can see that the change ∆I(t) ≡ I(t) − I(−∞) is small at all times, that is, I is indeed an adiabatic
invariant. Whereas the asymptotic change ∆I(∞) is so small that it cannot be seen in the plot, ∆I(t) at t
around zero, where the change of ω(t) occurs, is noticeable. Although ∆I(∞) is exponentially small, ∆I(t)
for a general t is not.
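The numerical experiment described above is easy to reproduce. The sketch below (not the author's code) integrates ẍ + ω²(t)x = 0 with ω(t) of Eq. (6.24) for the stated parameters and monitors I(t) = E(t)/ω(t); the integration interval (−60, 60) stands in for (−∞, ∞):

# Adiabatic invariant of an oscillator with slowly varying frequency, Eqs. (6.23)-(6.24).
import numpy as np
from scipy.integrate import solve_ivp

m, omega0, domega, alpha = 2.0, 1.0, 1.0, 0.1       # alpha = omega0/10, as in the text
omega = lambda t: omega0 + domega * (1.0 + np.tanh(alpha * t)) / 2.0

def rhs(t, y):
    x, v = y
    return [v, -omega(t)**2 * x]                    # equation of motion: xddot + omega^2(t) x = 0

t_span = (-60.0, 60.0)
t_eval = np.linspace(*t_span, 2000)
sol = solve_ivp(rhs, t_span, [1.0, 0.0], t_eval=t_eval, rtol=1e-10, atol=1e-12)

x, v = sol.y
E = 0.5 * m * (v**2 + omega(sol.t)**2 * x**2)       # energy, Eq. (6.23)
I = E / omega(sol.t)                                # instantaneous action variable
print("I(-inf) =", I[0], "  I(+inf) =", I[-1], "  max |Delta I| =", np.abs(I - I[0]).max())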
Consider now a harmonic oscillator whose frequency is weakly modulated, ω(t) = ω0 [1 + α cos(ωt)] with α ≪ 1,
where ω now denotes the modulation frequency. The second of Eqs. (6.16) then becomes

ϕ̇ = ω0 [1 + α cos(ωt)] − (1/2) ( αω sin(ωt) / (1 + α cos(ωt)) ) sin(2ϕ) .    (6.26)
As we shall see, parametric resonance occurs if ω is close to 2ω0 . Thus we use

ω = 2ω0 + ε    (6.27)

with a small resonance detuning ε. The solution of the equation for ϕ can be searched for in the form

ϕ(t) = (ω0 + ε/2) t + f(t)/2 ,    (6.28)
where f(t) is a slow phase. Inserting this Ansatz into the equation for ϕ, reducing the product of trigonometric functions as

2 sin[(2ω0 + ε) t + f(t)] sin[(2ω0 + ε) t] = cos f(t) − cos[2(2ω0 + ε) t + f(t)]    (6.30)

and dropping the fast oscillating terms ∼ α, one obtains the slow equation
ḟ = −dUeff/df ,    Ueff(f) = ε f + αω0 sin f ,    (6.32)
a tilted washboard potential. For small resonance detuning, |ε| < αω0 , the potential Ueff has local maxima
and minima, so that the phase f(t) relaxes down to a constant value that satisfies

cos f = −ε/(αω0) ,    (6.33)
thus

sin f = −√( 1 − (ε/(αω0))² ) .    (6.34)
This result for sin f corresponds to the minima of Ueff(f), whereas the solution with the sign (+) in front
of the square root corresponds to the maxima of Ueff(f). The latter is unstable and should be discarded.
The result above means that the oscillator locks into the frequency ω/2, as also follows from the solution
in natural variables. In the case |ε| > αω0 the potential Ueff(f) is monotonic, and f(t) performs a slow
nonlinear motion with a variable rate without stopping anywhere.
To see the parametric instability that develops for |ε| < αω0 , let us now consider the first of the angle-action
equations (6.16). This equation can be written as

d ln I / dt ≅ 2ω0 α cos[(2ω0 + ε) t + f(t)] sin[(2ω0 + ε) t] .    (6.35)
Reducing the product of trigonometric functions as

2 cos[(2ω0 + ε) t + f(t)] sin[(2ω0 + ε) t] = −sin f(t) + sin[2(2ω0 + ε) t + f(t)]    (6.36)
and dropping the fast oscillating term, one obtains
d ln I / dt ≅ −ω0 α sin f(t) .    (6.37)
After f (t) approaches a constant given by Eq. (6.34), this equation becomes
d ln I / dt = 2µ ,    µ = (1/2) √( (αω0)² − ε² ) ,    (6.38)
where µ is the parametric resonance exponent. Solution of Eq. (6.38) has the form
I = I0 e^{2µt} ,    (6.39)

where I0 is the initial value of I. One can see the exponential divergence of I, and thus of the oscillator's energy
E, in the region of parametric resonance. On the other hand, for large detuning, |ε| > αω0 , the exponent µ is
imaginary and I oscillates without growing.
As we have seen, the analytical solution using angle-action variables is more elegant than the straightforward
Newtonian solution and provides more insight. Numerical solutions can be done with both formalisms to the
same effect. However, including damping and nonlinearities in the Newtonian formalism is straightforward,
whereas in the angle-action formalism it requires significant work.
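For completeness, the parametric-resonance exponent (6.38) can be checked against a direct integration in natural variables. The sketch below (not from the text) assumes the modulation ω(t) = ω0 [1 + α cos((2ω0 + ε)t)] used above and compares the measured growth rate of ln I with 2µ; the parameter values are illustrative:

# Parametric resonance: growth of I = E/omega compared with 2*mu of Eq. (6.38).
import numpy as np
from scipy.integrate import solve_ivp

m, omega0, a, eps = 1.0, 1.0, 0.05, 0.02        # |eps| < a*omega0: inside the resonance region
om = lambda t: omega0 * (1.0 + a * np.cos((2.0 * omega0 + eps) * t))

sol = solve_ivp(lambda t, y: [y[1], -om(t)**2 * y[0]], (0.0, 600.0), [1.0, 0.0],
                t_eval=np.linspace(0.0, 600.0, 4000), rtol=1e-9, atol=1e-12)
x, v = sol.y
I = 0.5 * m * (v**2 + om(sol.t)**2 * x**2) / om(sol.t)       # instantaneous action variable

# compare the measured growth rate of ln I with 2*mu from Eq. (6.38)
slope = np.polyfit(sol.t[2000:], np.log(I[2000:]), 1)[0]     # fit after the slow phase f(t) has locked
mu = 0.5 * np.sqrt((a * omega0)**2 - eps**2)
print(slope, 2.0 * mu)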