Numerical Differentiation: Forward and Backward Differences

This document discusses numerical differentiation techniques, including forward and backward difference formulas for approximating derivatives with finite differences. It presents commonly used formulas on equally spaced points, optimal step sizes, and Richardson's extrapolation for improving the accuracy of derivative approximations. It also discusses numerical integration using Lagrange interpolating polynomials.


Chapter 4: Numerical Differentiation and Integration
Per-Olof Persson, [email protected]
Department of Mathematics, University of California, Berkeley
Math 128A Numerical Analysis

Numerical Differentiation

Forward and Backward Differences

Inspired by the definition of the derivative,
\[ f'(x_0) = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}, \]
choose a small h and approximate
\[ f'(x_0) \approx \frac{f(x_0 + h) - f(x_0)}{h}. \]
The error term for the linear Lagrange polynomial gives
\[ f'(x_0) = \frac{f(x_0 + h) - f(x_0)}{h} - \frac{h}{2} f''(\xi). \]
This is known as the forward-difference formula if h > 0 and the backward-difference formula if h < 0.
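As an illustration (not from the slides), a minimal MATLAB sketch comparing the forward-difference approximation with the exact derivative of sin x at x0 = 1; the error shrinks roughly linearly in h until rounding takes over.

f = @(x) sin(x);                    % illustrative test function
x0 = 1;  dfexact = cos(1);          % exact derivative at x0
for h = 10.^(-(1:6))
    dfh = (f(x0 + h) - f(x0)) / h;  % forward-difference approximation
    fprintf('h = %.0e   error = %.2e\n', h, abs(dfh - dfexact));
end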

General Derivative Approximations

Differentiation of Lagrange Polynomials

Differentiate
\[ f(x) = \sum_{k=0}^{n} f(x_k) L_k(x) + \frac{(x - x_0) \cdots (x - x_n)}{(n+1)!} f^{(n+1)}(\xi(x)) \]
to get
\[ f'(x_j) = \sum_{k=0}^{n} f(x_k) L'_k(x_j) + \frac{f^{(n+1)}(\xi(x_j))}{(n+1)!} \prod_{k \ne j} (x_j - x_k). \]
This is the (n + 1)-point formula for approximating f'(x_j).

Commonly Used Formulas

Using equally spaced points with h = x_{j+1} - x_j, we have the three-point formulas
\[ f'(x_0) = \frac{1}{2h} \left[ -3 f(x_0) + 4 f(x_0 + h) - f(x_0 + 2h) \right] + \frac{h^2}{3} f^{(3)}(\xi_0), \]
\[ f'(x_0) = \frac{1}{2h} \left[ -f(x_0 - h) + f(x_0 + h) \right] - \frac{h^2}{6} f^{(3)}(\xi_1), \]
\[ f'(x_0) = \frac{1}{2h} \left[ f(x_0 - 2h) - 4 f(x_0 - h) + 3 f(x_0) \right] + \frac{h^2}{3} f^{(3)}(\xi_2), \]
the second-derivative formula
\[ f''(x_0) = \frac{1}{h^2} \left[ f(x_0 - h) - 2 f(x_0) + f(x_0 + h) \right] - \frac{h^2}{12} f^{(4)}(\xi), \]
and the five-point formula
\[ f'(x_0) = \frac{1}{12h} \left[ f(x_0 - 2h) - 8 f(x_0 - h) + 8 f(x_0 + h) - f(x_0 + 2h) \right] + \frac{h^4}{30} f^{(5)}(\xi). \]
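A small MATLAB check (illustrative choices: f = exp, x0 = 0) of the error orders of the central three-point and five-point formulas; halving h should reduce the first error by about a factor of 4 and the second by about 16.

f = @(x) exp(x);  x0 = 0;  dfexact = 1;
for h = [1e-1 5e-2 2.5e-2]
    d3 = (f(x0+h) - f(x0-h)) / (2*h);                                % three-point central
    d5 = (f(x0-2*h) - 8*f(x0-h) + 8*f(x0+h) - f(x0+2*h)) / (12*h);   % five-point
    fprintf('h = %.4f   3-pt error = %.2e   5-pt error = %.2e\n', ...
            h, abs(d3 - dfexact), abs(d5 - dfexact));
end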

Optimal h

Consider the three-point central difference formula:
\[ f'(x_0) = \frac{1}{2h} \left[ f(x_0 + h) - f(x_0 - h) \right] - \frac{h^2}{6} f^{(3)}(\xi_1). \]
Suppose that round-off errors ε are introduced when computing f. Then the approximation error is
\[ \left| f'(x_0) - \frac{\tilde{f}(x_0 + h) - \tilde{f}(x_0 - h)}{2h} \right| \le \frac{\varepsilon}{h} + \frac{h^2}{6} M = e(h), \]
where \tilde{f} is the computed function and |f^{(3)}(x)| ≤ M.
The bound is the sum of the truncation error h^2 M / 6 and the round-off error ε / h.
Minimize e(h) to find the optimal h = \sqrt[3]{3\varepsilon / M}.

Richardson's Extrapolation

Suppose N(h) approximates an unknown M with error
\[ M - N(h) = K_1 h + K_2 h^2 + K_3 h^3 + \cdots; \]
then an O(h^j) approximation is given for j = 2, 3, \ldots by
\[ N_j(h) = N_{j-1}\!\left(\frac{h}{2}\right) + \frac{N_{j-1}(h/2) - N_{j-1}(h)}{2^{j-1} - 1}. \]
The results can be written in a table:

O(h)                     O(h^2)          O(h^3)          O(h^4)
1: N_1(h) ≡ N(h)
2: N_1(h/2) ≡ N(h/2)     3: N_2(h)
4: N_1(h/4) ≡ N(h/4)     5: N_2(h/2)     6: N_3(h)
7: N_1(h/8) ≡ N(h/8)     8: N_2(h/4)     9: N_3(h/2)     10: N_4(h)
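A small MATLAB experiment (illustrative, not from the slides) showing the trade-off for the central difference applied to sin x at x0 = 1; with M ≈ 1 and ε on the order of machine epsilon, the smallest observed error sits near h = (3ε/M)^(1/3), roughly 1e-5.

f = @(x) sin(x);  x0 = 1;  dfexact = cos(1);
for h = 10.^(-(1:10))
    err = abs((f(x0+h) - f(x0-h)) / (2*h) - dfexact);
    fprintf('h = %.0e   error = %.2e\n', h, err);
end
% The error decreases like h^2 until the roundoff term (~eps/h) dominates near h = 1e-5.
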
Richardson's Extrapolation

If some error terms are zero, different and more efficient formulas can be derived.
Example: If
\[ M - N(h) = K_2 h^2 + K_4 h^4 + \cdots, \]
then an O(h^{2j}) approximation is given for j = 2, 3, \ldots by
\[ N_j(h) = N_{j-1}\!\left(\frac{h}{2}\right) + \frac{N_{j-1}(h/2) - N_{j-1}(h)}{4^{j-1} - 1}. \]

Numerical Quadrature

Integration of Lagrange Interpolating Polynomials

Select {x_0, \ldots, x_n} in [a, b] and integrate the Lagrange polynomial P_n(x) = \sum_{i=0}^{n} f(x_i) L_i(x) and its truncation error term over [a, b] to obtain
\[ \int_a^b f(x)\,dx = \sum_{i=0}^{n} a_i f(x_i) + E(f) \]
with
\[ a_i = \int_a^b L_i(x)\,dx \]
and
\[ E(f) = \frac{1}{(n+1)!} \int_a^b \prod_{i=0}^{n} (x - x_i)\, f^{(n+1)}(\xi(x))\,dx. \]
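A minimal MATLAB sketch (not from the slides) of the even-power extrapolation above, applied to the central difference N(h) = (f(x0+h) - f(x0-h))/(2h), whose error expansion contains only even powers of h, so the 4^(j-1) - 1 divisor applies.

f = @(x) exp(x);  x0 = 0;  h = 0.2;  levels = 4;
N = zeros(levels, levels);
for i = 1:levels
    hi = h / 2^(i-1);
    N(i,1) = (f(x0 + hi) - f(x0 - hi)) / (2*hi);        % central difference, step hi
    for j = 2:i
        N(i,j) = N(i,j-1) + (N(i,j-1) - N(i-1,j-1)) / (4^(j-1) - 1);
    end
end
disp(N)    % the (levels, levels) entry is the most accurate approximation of f'(x0) = 1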

Trapezoidal and Simpson's Rules

The Trapezoidal Rule
The linear Lagrange polynomial with x_0 = a, x_1 = b, h = b - a gives
\[ \int_a^b f(x)\,dx = \frac{h}{2} \left[ f(x_0) + f(x_1) \right] - \frac{h^3}{12} f''(\xi). \]

Simpson's Rule
The second Lagrange polynomial with x_0 = a, x_2 = b, x_1 = a + h, h = (b - a)/2 gives
\[ \int_{x_0}^{x_2} f(x)\,dx = \frac{h}{3} \left[ f(x_0) + 4 f(x_1) + f(x_2) \right] - \frac{h^5}{90} f^{(4)}(\xi). \]

Definition
The degree of accuracy, or precision, of a quadrature formula is the largest positive integer n such that the formula is exact for x^k, for each k = 0, 1, \ldots, n.

The Newton-Cotes Formulas

The Closed Newton-Cotes Formulas
Use nodes x_i = x_0 + ih, x_0 = a, x_n = b, h = (b - a)/n:
\[ \int_a^b f(x)\,dx \approx \sum_{i=0}^{n} a_i f(x_i), \qquad a_i = \int_{x_0}^{x_n} L_i(x)\,dx = \int_{x_0}^{x_n} \prod_{j \ne i} \frac{x - x_j}{x_i - x_j}\,dx. \]
n = 1 gives the Trapezoidal rule, n = 2 gives Simpson's rule.

The Open Newton-Cotes Formulas
Use nodes x_i = x_0 + ih, x_0 = a + h, x_n = b - h, h = (b - a)/(n + 2). Setting n = 0 gives the Midpoint rule:
\[ \int_{x_{-1}}^{x_1} f(x)\,dx = 2h f(x_0) + \frac{h^3}{3} f''(\xi). \]
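A quick MATLAB check (illustrative) of the definition above on [0, 1]: the Trapezoidal rule is exact for x^k up to k = 1 (degree of precision 1) and Simpson's rule up to k = 3 (degree of precision 3).

a = 0;  b = 1;  h = (b - a)/2;
for k = 0:4
    f = @(x) x.^k;
    exact = 1/(k+1);
    T = (b-a)/2 * (f(a) + f(b));               % Trapezoidal rule
    S = h/3 * (f(a) + 4*f(a+h) + f(b));        % Simpson's rule
    fprintf('k = %d   trap error = %.3f   Simpson error = %.4f\n', ...
            k, abs(T - exact), abs(S - exact));
end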

Composite Rules

Theorem
Let f ∈ C^2[a, b], h = (b - a)/n, x_j = a + jh, μ ∈ (a, b). The Composite Trapezoidal rule for n subintervals is
\[ \int_a^b f(x)\,dx = \frac{h}{2} \left[ f(a) + 2 \sum_{j=1}^{n-1} f(x_j) + f(b) \right] - \frac{b - a}{12} h^2 f''(\mu). \]

Theorem
Let f ∈ C^4[a, b], n even, h = (b - a)/n, x_j = a + jh, μ ∈ (a, b). The Composite Simpson's rule for n subintervals is
\[ \int_a^b f(x)\,dx = \frac{h}{3} \left[ f(a) + 2 \sum_{j=1}^{(n/2)-1} f(x_{2j}) + 4 \sum_{j=1}^{n/2} f(x_{2j-1}) + f(b) \right] - \frac{b - a}{180} h^4 f^{(4)}(\mu). \]

Romberg Integration

Compute a sequence of n integrals using the Composite Trapezoidal rule, where m_1 = 1, m_2 = 2, m_3 = 4, \ldots and m_n = 2^{n-1}.
The step sizes are then h_k = (b - a)/m_k = (b - a)/2^{k-1}.
The Trapezoidal rule becomes
\[ \int_a^b f(x)\,dx = \frac{h_k}{2} \left[ f(a) + f(b) + 2 \sum_{i=1}^{2^{k-1} - 1} f(a + i h_k) \right] - \frac{b - a}{12} h_k^2 f''(\mu_k). \]
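A minimal MATLAB sketch of the Composite Simpson's rule; the function name compsimpson and its interface are illustrative, not from the slides, and f is assumed to accept vector arguments.

function I = compsimpson(f, a, b, n)
% Composite Simpson's rule on [a, b] with n subintervals (n must be even).
h = (b - a) / n;
x = a + (0:n) * h;
w = 2 * ones(1, n+1);  w(2:2:n) = 4;  w([1 n+1]) = 1;   % weights 1 4 2 ... 2 4 1
I = h/3 * sum(w .* f(x));
end

For example, compsimpson(@(x) exp(x), 0, 1, 10) approximates e - 1.
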
Romberg Integration

Let R_{k,1} denote the trapezoidal approximation with m_k subintervals; then
\[ R_{1,1} = \frac{h_1}{2} \left[ f(a) + f(b) \right] = \frac{b - a}{2} \left[ f(a) + f(b) \right], \]
\[ R_{2,1} = \frac{1}{2} \left[ R_{1,1} + h_1 f(a + h_2) \right], \]
\[ R_{3,1} = \frac{1}{2} \left\{ R_{2,1} + h_2 \left[ f(a + h_3) + f(a + 3 h_3) \right] \right\}, \]
\[ R_{k,1} = \frac{1}{2} \left[ R_{k-1,1} + h_{k-1} \sum_{i=1}^{2^{k-2}} f(a + (2i - 1) h_k) \right]. \]
Apply Richardson extrapolation to these values:
\[ R_{k,j} = R_{k,j-1} + \frac{R_{k,j-1} - R_{k-1,j-1}}{4^{j-1} - 1}. \]

Romberg Integration

MATLAB Implementation

function R=romberg(f,a,b,n)
% f must accept vector arguments (it is applied to a vector of new nodes)
h=b-a;
R=zeros(n,n);
R(1,1)=h/2*(f(a)+f(b));       % trapezoidal rule with one subinterval
for i=2:n
    % halve the step: reuse R(i-1,1) and add the values at the new midpoints
    R(i,1)=1/2*(R(i-1,1)+h*sum(f(a+((1:2^(i-2))-0.5)*h)));
    for j=2:i
        % Richardson extrapolation across the row
        R(i,j)=R(i,j-1)+(R(i,j-1)-R(i-1,j-1))/(4^(j-1)-1);
    end
    h=h/2;
end
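For instance (an illustrative call, not from the slides):

f = @(x) exp(-x.^2);          % must accept vector arguments
R = romberg(f, 0, 1, 6);
R(6,6)                        % ≈ 0.746824, the most extrapolated value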

Error Estimation

The error term in Simpson's rule requires knowledge of f^{(4)}:
\[ \int_a^b f(x)\,dx = S(a, b) - \frac{h^5}{90} f^{(4)}(\mu). \]
Instead, apply it again with step size h/2:
\[ \int_a^b f(x)\,dx = S\!\left(a, \frac{a+b}{2}\right) + S\!\left(\frac{a+b}{2}, b\right) - \frac{1}{16} \frac{h^5}{90} f^{(4)}(\tilde{\mu}). \]
The assumption f^{(4)}(\mu) \approx f^{(4)}(\tilde{\mu}) gives the error estimate
\[ \left| \int_a^b f(x)\,dx - S\!\left(a, \frac{a+b}{2}\right) - S\!\left(\frac{a+b}{2}, b\right) \right| \approx \frac{1}{15} \left| S(a, b) - S\!\left(a, \frac{a+b}{2}\right) - S\!\left(\frac{a+b}{2}, b\right) \right|. \]

Adaptive Quadrature

To compute \int_a^b f(x)\,dx within a tolerance ε > 0, first apply Simpson's rule with h = (b - a)/2 and with h/2.
If
\[ \left| S(a, b) - S\!\left(a, \frac{a+b}{2}\right) - S\!\left(\frac{a+b}{2}, b\right) \right| < 15 \varepsilon, \]
then the integral is sufficiently accurate.
If not, apply the technique to [a, (a + b)/2] and [(a + b)/2, b], and compute the integral on each half within a tolerance of ε/2.
Repeat until each portion is within the required tolerance.
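A minimal recursive MATLAB sketch of the adaptive procedure above; the function names adaptsimpson and simpson and their structure are illustrative, not from the slides.

function I=adaptsimpson(f,a,b,tol)
c=(a+b)/2;
S1=simpson(f,a,b);                  % Simpson's rule with h = (b-a)/2
S2=simpson(f,a,c)+simpson(f,c,b);   % two applications with step h/2
if abs(S1-S2)<15*tol
    I=S2;                           % accepted: estimated error below tol
else
    I=adaptsimpson(f,a,c,tol/2)+adaptsimpson(f,c,b,tol/2);   % refine each half
end
end

function S=simpson(f,a,b)
h=(b-a)/2;
S=h/3*(f(a)+4*f(a+h)+f(b));
end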

Gaussian Quadrature

Basic idea: Calculate both nodes x_1, \ldots, x_n and coefficients c_1, \ldots, c_n such that
\[ \int_a^b f(x)\,dx \approx \sum_{i=1}^{n} c_i f(x_i). \]
Since there are 2n parameters, we might expect a degree of precision of 2n - 1.
Example: n = 2 gives the rule
\[ \int_{-1}^{1} f(x)\,dx \approx f\!\left(-\frac{\sqrt{3}}{3}\right) + f\!\left(\frac{\sqrt{3}}{3}\right) \]
with degree of precision 3.

Legendre Polynomials

The Legendre polynomials P_n(x) have the properties:
1. For each n, P_n(x) is a monic polynomial of degree n (leading coefficient 1).
2. \int_{-1}^{1} P(x) P_n(x)\,dx = 0 when P(x) is a polynomial of degree less than n.
The roots of P_n(x) are distinct, lie in the interval (-1, 1), and are symmetric with respect to the origin.
Examples:
P_0(x) = 1,   P_1(x) = x,
P_2(x) = x^2 - 1/3,   P_3(x) = x^3 - (3/5) x,
P_4(x) = x^4 - (6/7) x^2 + 3/35.
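A quick MATLAB check (illustrative) of the two-point rule on [-1, 1]: it reproduces the integral of x^k exactly for k = 0, ..., 3 and first fails at k = 4.

for k = 0:4
    exact = (1 - (-1)^(k+1)) / (k + 1);          % integral of x^k over [-1, 1]
    approx = (-sqrt(3)/3)^k + (sqrt(3)/3)^k;     % two-point Gauss rule
    fprintf('k = %d   exact = %.4f   2-point rule = %.4f\n', k, exact, approx);
end
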
Gaussian Quadrature

Theorem
Suppose x_1, \ldots, x_n are the roots of P_n(x) and
\[ c_i = \int_{-1}^{1} \prod_{j \ne i} \frac{x - x_j}{x_i - x_j}\,dx. \]
If P(x) is any polynomial of degree less than 2n, then
\[ \int_{-1}^{1} P(x)\,dx = \sum_{i=1}^{n} c_i P(x_i). \]

Computing Gaussian Quadrature Coefficients

MATLAB Implementation

function [x,c]=gaussquad(n)
%GAUSSQUAD Gaussian quadrature
P=zeros(n+1,n+1);                 % row k+1 holds the coefficients of P_k
P([1,2],1)=1;                     % P_0 = 1, P_1 = x
for k=1:n-1
    % three-term recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    P(k+2,1:k+2)=((2*k+1)*[P(k+1,1:k+1) 0]- ...
        k*[0 0 P(k,1:k)])/(k+1);
end
x=sort(roots(P(n+1,1:n+1)));      % nodes are the roots of P_n
A=zeros(n,n);
for i=1:n
    A(i,:)=polyval(P(i,1:i),x)';  % evaluate P_0, ..., P_{n-1} at the nodes
end
c=A\[2;zeros(n-1,1)];             % weights from the moment conditions
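For instance (an illustrative call, not from the slides), a 5-point rule on [-1, 1]:

f = @(x) exp(x);
[x, c] = gaussquad(5);
I = c' * f(x)                     % ≈ 2.3504, the value of e - 1/e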

Arbitrary Intervals

Transform integrals \int_a^b f(x)\,dx into integrals over [-1, 1] by a change of variables:
\[ t = \frac{2x - a - b}{b - a} \quad \Longleftrightarrow \quad x = \frac{1}{2} \left[ (b - a) t + a + b \right]. \]
Gaussian quadrature then gives
\[ \int_a^b f(x)\,dx = \int_{-1}^{1} f\!\left(\frac{(b - a) t + (b + a)}{2}\right) \frac{b - a}{2}\,dt. \]

Double Integrals

Consider the double integral
\[ \iint_R f(x, y)\,dA, \qquad R = \{(x, y) \mid a \le x \le b,\ c \le y \le d\}. \]
Partition [a, b] and [c, d] into even numbers of subintervals n and m, with step sizes h = (b - a)/n and k = (d - c)/m.
Write the integral as an iterated integral
\[ \iint_R f(x, y)\,dA = \int_a^b \left( \int_c^d f(x, y)\,dy \right) dx \]
and use any quadrature rule in an iterated manner.
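A MATLAB sketch (illustrative) of the change of variables above combined with the gaussquad function: a 5-point Gauss rule for the integral of exp(-x^2) over [0, 2].

a = 0;  b = 2;
f = @(x) exp(-x.^2);
[t, c] = gaussquad(5);
x = ((b - a)*t + a + b) / 2;              % map nodes from [-1, 1] to [a, b]
I = (b - a)/2 * (c' * f(x))               % ≈ 0.8821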

Composite Simpson's Rule Double Integration

The Composite Simpson's rule gives
\[ \int_a^b \left( \int_c^d f(x, y)\,dy \right) dx = \frac{hk}{9} \sum_{i=0}^{n} \sum_{j=0}^{m} w_{i,j} f(x_i, y_j) + E, \]
where x_i = a + ih, y_j = c + jk, and each w_{i,j} is the product of the nested one-dimensional Composite Simpson's coefficients 1, 4, 2, ..., 2, 4, 1 in x and in y (for example, with n = 4 and m = 2 the weight grid over [a, b] × [c, d] is 1 4 2 4 1 / 4 16 8 16 4 / 1 4 2 4 1). The error is
\[ E = -\frac{(d - c)(b - a)}{180} \left[ h^4 \frac{\partial^4 f}{\partial x^4}(\bar{\eta}, \bar{\mu}) + k^4 \frac{\partial^4 f}{\partial y^4}(\hat{\eta}, \hat{\mu}) \right]. \]

Non-Rectangular Regions

The same technique can be applied to double integrals of the form
\[ \int_a^b \int_{c(x)}^{d(x)} f(x, y)\,dy\,dx. \]
The step size for x is still h = (b - a)/n, but for y it varies with x:
\[ k(x) = \frac{d(x) - c(x)}{m}. \]
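A minimal MATLAB sketch of the product-weight formula above on a rectangle; the function name simpson2d is illustrative, n and m must be even, and f is assumed to accept matrix arguments elementwise.

function I = simpson2d(f, a, b, c, d, n, m)
h = (b - a)/n;  k = (d - c)/m;
wx = 2*ones(1, n+1);  wx(2:2:n) = 4;  wx([1 n+1]) = 1;   % Simpson weights in x
wy = 2*ones(1, m+1);  wy(2:2:m) = 4;  wy([1 m+1]) = 1;   % Simpson weights in y
[X, Y] = meshgrid(a + (0:n)*h, c + (0:m)*k);             % grid of nodes
W = wy' * wx;                                            % product weights w_{i,j}
I = h*k/9 * sum(sum(W .* f(X, Y)));
end

For example, simpson2d(@(x,y) exp(x.*y), 0, 1, 0, 1, 10, 10) is approximately 1.3179.
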
Gaussian Double Integration

For Gaussian integration, first transform the roots r_{n,j} from [-1, 1] to [a, b] and [c, d], respectively.
The integral is then
\[ \int_a^b \int_c^d f(x, y)\,dy\,dx \approx \frac{(b - a)(d - c)}{4} \sum_{i=1}^{n} \sum_{j=1}^{n} c_{n,i}\, c_{n,j}\, f(x_i, y_j). \]
Similar techniques can be used for non-rectangular regions.

Improper Integrals with a Singularity

The improper integral below, with a singularity at the left endpoint, converges if and only if 0 < p < 1, and then
\[ \int_a^b \frac{1}{(x - a)^p}\,dx = \left. \frac{(x - a)^{1-p}}{1 - p} \right|_a^b = \frac{(b - a)^{1-p}}{1 - p}. \]
More generally, if
\[ f(x) = \frac{g(x)}{(x - a)^p}, \qquad 0 < p < 1, \quad g \text{ continuous on } [a, b], \]
construct the fourth Taylor polynomial P_4(x) for g about a:
\[ P_4(x) = g(a) + g'(a)(x - a) + \frac{g''(a)}{2!}(x - a)^2 + \frac{g'''(a)}{3!}(x - a)^3 + \frac{g^{(4)}(a)}{4!}(x - a)^4 \]

Improper Integrals with a Singularity

and write
\[ \int_a^b f(x)\,dx = \int_a^b \frac{g(x) - P_4(x)}{(x - a)^p}\,dx + \int_a^b \frac{P_4(x)}{(x - a)^p}\,dx. \]
The second integral can be computed exactly:
\[ \int_a^b \frac{P_4(x)}{(x - a)^p}\,dx = \sum_{k=0}^{4} \frac{g^{(k)}(a)}{k!} \int_a^b (x - a)^{k-p}\,dx = \sum_{k=0}^{4} \frac{g^{(k)}(a)}{k!\,(k + 1 - p)} (b - a)^{k+1-p}. \]
For the first integral, use the Composite Simpson's rule to compute the integral of G on [a, b], where
\[ G(x) = \begin{cases} \dfrac{g(x) - P_4(x)}{(x - a)^p}, & \text{if } a < x \le b, \\[4pt] 0, & \text{if } x = a. \end{cases} \]
Note that 0 < p < 1 and P_4^{(k)}(a) agrees with g^{(k)}(a) for each k = 0, 1, 2, 3, 4, so G ∈ C^4[a, b] and Simpson's rule can be applied.
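A worked MATLAB sketch of the splitting above, with illustrative choices g(x) = cos x, a = 0, b = 1, p = 1/2, and the compsimpson function from earlier assumed to be on the path.

% Approximate the integral of cos(x)/sqrt(x) over [0, 1] by the P4 splitting.
p = 1/2;  a = 0;  b = 1;
gk = [1 0 -1 0 1];                        % g^(k)(0) for g = cos, k = 0..4
% exact part: sum of g^(k)(a) / (k! (k+1-p)) * (b-a)^(k+1-p)
I1 = sum(gk ./ (factorial(0:4) .* ((0:4) + 1 - p)) .* (b - a).^((0:4) + 1 - p));
P4 = @(x) 1 - x.^2/2 + x.^4/24;           % fourth Taylor polynomial of cos about 0
G  = @(x) (cos(x) - P4(x)) ./ max(x, realmin).^p;   % guard so G(0) = 0, not 0/0
I2 = compsimpson(G, a, b, 16);            % smooth part via Composite Simpson's rule
I  = I1 + I2                              % ≈ 1.8090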

Singularity at the Right Endpoint

For an improper integral with a singularity at the right endpoint b, make the substitution z = -x, dz = -dx, to obtain
\[ \int_a^b f(x)\,dx = \int_{-b}^{-a} f(-z)\,dz, \]
which has its singularity at the left endpoint.
For an improper integral with a singularity at c, where a < c < b, split it into two improper integrals:
\[ \int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx. \]

Infinite Limits of Integration

An integral of the form \int_a^\infty \frac{1}{x^p}\,dx, with p > 1, can be converted to an integral with a left-endpoint singularity at 0 by the substitution
\[ t = x^{-1}, \quad dt = -x^{-2}\,dx, \quad \text{so} \quad dx = -x^2\,dt = -t^{-2}\,dt, \]
which gives
\[ \int_a^\infty \frac{1}{x^p}\,dx = -\int_{1/a}^{0} \frac{t^p}{t^2}\,dt = \int_0^{1/a} \frac{1}{t^{2-p}}\,dt. \]
More generally, this variable change converts \int_a^\infty f(x)\,dx into
\[ \int_a^\infty f(x)\,dx = \int_0^{1/a} t^{-2} f\!\left(\frac{1}{t}\right)\,dt. \]
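A MATLAB sketch (illustrative) of the 1/x substitution combined with gaussquad and the interval transformation from earlier, for the example f(x) = 1/x^3 on [1, inf) with exact value 1/2; the Gauss nodes are interior, so t = 0 is never evaluated.

f = @(x) 1 ./ x.^3;
g = @(t) t.^(-2) .* f(1 ./ t);            % transformed integrand on (0, 1/a], a = 1
[t, c] = gaussquad(5);
s = (t + 1) / 2;                          % map nodes from [-1, 1] to [0, 1]
I = 1/2 * (c' * g(s))                     % ≈ 0.5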
