A gentle introduction to variational calculus

Patrick Pletscher

June 11, 2007

In traditional calculus we usually consider functions f : X → Y; maximization and
minimization (stated here for minimization) refer to finding a point x_m ∈ X such that
f(x_m) ≤ f(x) for all x ∈ X. In variational calculus, however, we do not want to find a
minimum of a function, but rather a function that minimizes an integral. This is standard
in physics and mathematics, but it also has numerous important applications in computer
science, for example in computer graphics, numerics, machine learning and computer vision.
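As a concrete illustration (this example is not part of the original note), consider the
length of a curve y = f(x) between two fixed endpoints (a, f(a)) and (b, f(b)), where f_x
denotes the derivative of f:
\[
J(f) = \int_a^b \sqrt{1 + f_x(x)^2}\, dx .
\]
Minimizing this quantity over all functions with the given endpoint values asks for the
shortest curve joining the two points; we return to this example below.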

1 Euler-Lagrange equation
A functional can be thought of as a special “function” mapping from the space of
functions to R. Here we’ll consider a functional J, which has the form
\[
J(f) = \int_a^b F(x, f, f_x)\, dx \tag{1}
\]

We want to find the function f(x) that minimizes J. We’ll assume that F(·) is twice
continuously differentiable, i.e. the first and second derivatives of F are continuous. We’ll
show that the following Euler-Lagrange equation has to hold for f to be a stationary
point of the functional in (1):
 
\[
\frac{\partial}{\partial f} F(x, f, f_x) - \frac{d}{dx}\left( \frac{\partial}{\partial f_x} F(x, f, f_x) \right) = 0 .
\]

This is usually abbreviated as follows:


\[
F_f - \frac{d}{dx} F_{f_x} = 0 . \tag{2}
\]
Before proving this, we’ll explicitly write down the equation in (2). For this we first
write out the total derivative:
\[
\begin{aligned}
\frac{d}{dx} F_{f_x} &= \frac{\partial}{\partial x} F_{f_x}(x, f, f_x) + \frac{\partial}{\partial f} F_{f_x}(x, f, f_x)\, f_x + \frac{\partial}{\partial f_x} F_{f_x}(x, f, f_x)\, f_{xx} \\
&= F_{f_x,x} + F_{f_x,f}\, f_x + F_{f_x,f_x}\, f_{xx} .
\end{aligned}
\]

And thus the equation in (2) in explicit form becomes:
\[
F_f - F_{f_x,x} - F_{f_x,f}\, f_x - F_{f_x,f_x}\, f_{xx} = 0 .
\]
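As a quick sanity check (this worked example is not in the original note), plug the
arc-length integrand F(x, f, f_x) = \sqrt{1 + f_x^2} from the introduction into this
explicit form. Since F depends neither on x nor on f, we have F_f = 0, F_{f_x,x} = 0 and
F_{f_x,f} = 0, while
\[
F_{f_x} = \frac{f_x}{\sqrt{1 + f_x^2}}, \qquad F_{f_x,f_x} = \frac{1}{(1 + f_x^2)^{3/2}},
\]
so the explicit Euler-Lagrange equation reduces to
\[
-\frac{f_{xx}}{(1 + f_x^2)^{3/2}} = 0 \quad \Longleftrightarrow \quad f_{xx} = 0 ,
\]
i.e. the shortest curve between two points is a straight line, as expected.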
We now prove the Euler-Lagrange equation.

Proof. We add a test function η(x) with η(a) = η(b) = 0, scaled by an amplitude ε, to
the function f(x):
\[
f(x) \leftarrow f(x) + \epsilon\, \eta(x) .
\]
We’re thus now considering the varied functional
\[
J(\epsilon) = \int_a^b F\Big( x,\; f + \epsilon\eta,\; \frac{d}{dx}\big(f + \epsilon\eta\big) \Big)\, dx .
\]
A necessary condition for extremality is given by

\[
\forall \eta : \quad \frac{dJ}{d\epsilon}\bigg|_{\epsilon=0} = 0 ,
\]
i.e. there is no function in the “neighborhood” of f that gives a better value of J; this is
very similar to the standard maximization/minimization problems. We now perform a Taylor
approximation of the integrand in ε:
\[
F(x, f + \epsilon\eta, f_x + \epsilon\eta_x) = F(x, f, f_x) + \epsilon\, \eta(x) \frac{\partial}{\partial f} F(x, f, f_x) + \epsilon\, \eta_x(x) \frac{\partial}{\partial f_x} F(x, f, f_x) + O(\epsilon^2) .
\]
Because of the extremality condition above, the term linear in ε has to vanish:
\[
\frac{d}{d\epsilon} \int_a^b F(x, f, f_x) + \epsilon\, \eta(x) \frac{\partial}{\partial f} F(x, f, f_x) + \epsilon\, \eta_x(x) \frac{\partial}{\partial f_x} F(x, f, f_x)\, dx = 0
\]
\[
\int_a^b \frac{d}{d\epsilon} F(x, f, f_x) + \frac{d}{d\epsilon}\, \epsilon\, \eta(x) \frac{\partial}{\partial f} F(x, f, f_x) + \frac{d}{d\epsilon}\, \epsilon\, \eta_x(x) \frac{\partial}{\partial f_x} F(x, f, f_x)\, dx = 0
\]
\[
\int_a^b 0 + \eta(x) \frac{\partial}{\partial f} F(x, f, f_x) + \eta_x(x) \frac{\partial}{\partial f_x} F(x, f, f_x)\, dx = 0 .
\]

We now perform integration by parts, i.e.


\[
\int_a^b u(x) \frac{dv}{dx}(x)\, dx = \Big[ u(x)\, v(x) \Big]_a^b - \int_a^b \frac{du}{dx}(x)\, v(x)\, dx ,
\]

for the term $\eta_x(x)\, \frac{\partial}{\partial f_x} F(x, f, f_x)$ of the integrand:


\[
\int_a^b \eta_x(x) \frac{\partial}{\partial f_x} F(x, f, f_x)\, dx
= \underbrace{\Big[ \eta(x) \frac{\partial}{\partial f_x} F(x, f, f_x) \Big]_a^b}_{=0,\ \text{since } \eta(a)=\eta(b)=0}
- \int_a^b \eta(x) \frac{d}{dx}\left( \frac{\partial}{\partial f_x} F(x, f, f_x) \right) dx .
\]

We combine the two terms again to get:
\[
\int_a^b \eta(x) \frac{\partial}{\partial f} F(x, f, f_x) + \eta_x(x) \frac{\partial}{\partial f_x} F(x, f, f_x)\, dx =
\int_a^b \eta(x) \left( \frac{\partial}{\partial f} F(x, f, f_x) - \frac{d}{dx} \frac{\partial}{\partial f_x} F(x, f, f_x) \right) dx = 0 .
\]

Since the equation has to hold for all test functions η(x), the term in the parentheses has
to be equal to zero (this is the fundamental lemma of the calculus of variations), which
proves our claim.
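To make the variational argument concrete, here is a minimal numerical sketch (not part
of the original note; the functional, test function and helper names are chosen purely for
illustration). It perturbs the straight line, which solves f_xx = 0 for the arc-length
functional from the introduction, by εη(x) with η(a) = η(b) = 0, and checks that J has no
term linear in ε:

import numpy as np

def J(f_vals, x):
    """Approximate J(f) = integral of sqrt(1 + f_x^2) dx over [a, b] (trapezoidal rule)."""
    fx = np.gradient(f_vals, x)                      # finite-difference estimate of f_x
    integrand = np.sqrt(1.0 + fx**2)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))

x = np.linspace(0.0, 1.0, 2001)
f = x                            # candidate extremal: straight line through (0,0) and (1,1)
eta = np.sin(np.pi * x)          # test function, vanishes at both endpoints

J0 = J(f, x)
for eps in (0.1, 0.01, 0.001):
    dJ = J(f + eps * eta, x) - J0
    print(f"eps = {eps:5.3f}:  J(f + eps*eta) - J(f) = {dJ:.3e}")

The differences are positive and shrink roughly like ε², i.e. there is no linear term,
which is consistent with dJ/dε = 0 at ε = 0 for the Euler-Lagrange solution.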
