I. NUMERICAL DIFFERENTIATION
Numerical differentiation involves the computation of a derivative of a function f from given
values of f. Such formulas are basic to the numerical solution of differential equations. The
analytical differentiation of a function is a relatively easy and feasible task, unlike analytical integration, which in most cases is not feasible. Numerical differentiation and numerical integration, on the other hand, are straightforward and always feasible. However, analytical differentiation is sometimes undesirable, since the derivative (e.g., one whose terms contain products of transcendental functions) may become unwieldy both symbolically and computationally, and would consequently require more computing resources and introduce more error. In such cases, numerical differentiation has a place in scientific computing.
A. Finite Difference Methods – Forward Backward Central Difference formulae
A finite difference is a mathematical expression of the form f (x + b) − f (x + a). Finite difference
is often used as an approximation of the derivative, typically in numerical differentiation.
The difference operator, commonly denoted ∆, is the operator that maps a function f to the function ∆[f] defined by
∆[f](x) = f(x + 1) − f(x).
A difference equation is a functional equation that involves the finite difference operator in the
same way as a differential equation involves derivatives. There are many similarities between
difference equations and differential equations, especially in the solving methods. Certain
recurrence relations can be written as difference equations by replacing iteration notation with
finite differences.
In numerical analysis, finite differences are widely used for approximating derivatives, and the
term "finite difference" is often used as an abbreviation of "finite difference approximation of
derivatives".
Forward Difference
A forward difference, denoted ∆h[f], of a function f is a function defined as
∆h[f](x) = f(x + h) − f(x).
Backward Difference
A backward difference uses the function values at x and x − h, instead of the values at x + h and x:
∇h[f](x) = f(x) − f(x − h).
Central difference
The central difference is given by
δh[f](x) = f(x + h/2) − f(x − h/2).
Example:
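As a sketch, the three differences above can be used to approximate a derivative f′(x); the test function sin and the step size h = 1e-5 below are illustrative choices, not from the text:

```python
import math

def forward_diff(f, x, h=1e-5):
    # Forward difference: (f(x + h) - f(x)) / h, first-order accurate
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h=1e-5):
    # Backward difference: (f(x) - f(x - h)) / h, first-order accurate
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h=1e-5):
    # Central difference: (f(x + h) - f(x - h)) / (2h), second-order accurate
    return (f(x + h) - f(x - h)) / (2 * h)

# Approximate the derivative of sin at x = 1 (exact value: cos(1))
print(forward_diff(math.sin, 1.0))   # close to cos(1)
print(central_diff(math.sin, 1.0))   # even closer to cos(1)
```

The central difference is typically the most accurate of the three for the same h, since its error shrinks with h² rather than h.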
B. Derivatives for Noisy Data
Computing derivatives for noisy data is a common step in the physical, biological, and
engineering sciences. However, the mathematical formulation of numerical differentiation is
often ill-posed, which can make it difficult to choose a computational method and its
parameters.
Example:
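One common remedy for the ill-posedness, sketched below under the assumption of uniformly spaced samples, is to smooth the data before differencing (here with a simple moving average); the test signal sin(x) plus Gaussian noise, the noise level, and the window size are all illustrative choices:

```python
import math
import random

def smoothed_derivative(y, h, window=11):
    """Estimate dy/dx for noisy, uniformly spaced samples y (spacing h):
    smooth with a moving average, then apply central differences."""
    k = window // 2
    # Moving-average smoothing (edges use a shrunken window)
    s = []
    for i in range(len(y)):
        lo, hi = max(0, i - k), min(len(y), i + k + 1)
        s.append(sum(y[lo:hi]) / (hi - lo))
    # Central differences on the smoothed samples (one-sided at the ends)
    d = [0.0] * len(s)
    d[0] = (s[1] - s[0]) / h
    d[-1] = (s[-1] - s[-2]) / h
    for i in range(1, len(s) - 1):
        d[i] = (s[i + 1] - s[i - 1]) / (2 * h)
    return d

random.seed(0)
h = 0.01
xs = [i * h for i in range(200)]
noisy = [math.sin(x) + random.gauss(0, 0.005) for x in xs]
dyd = smoothed_derivative(noisy, h)
# Interior estimates should track cos(x) despite the noise
```

Differencing the raw samples directly would amplify the noise by a factor of 1/(2h); smoothing first trades a small bias for a much smaller variance.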
II. NUMERICAL INTEGRATION
The antiderivatives of many functions either cannot be expressed or cannot be expressed easily
in closed form (that is, in terms of known functions). Consequently, rather than evaluate definite
integrals of these functions directly, we resort to various techniques of numerical integration to
approximate their values.
A. Euler
In mathematics and computational science, the Euler
method (also called the forward Euler method) is a first-
order numerical procedure for solving ordinary differential
equations (ODEs) with a given initial value. It is the most
basic explicit method for numerical integration of ordinary
differential equations and is the simplest Runge–Kutta
method.
The Euler method is a first-order method, which means that
the local error (error per step) is proportional to the square of
the step size, and the global error (error at a given time) is
proportional to the step size.
The illustration on the right shows the Euler Method. The unknown curve is in blue, and its
polygonal approximation is in red.
Example:
Solution:
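A minimal sketch of the forward Euler update y(n+1) = y(n) + h·f(t(n), y(n)); the test problem y′ = y, y(0) = 1 is an assumed example, not the one from the original figure:

```python
def euler(f, t0, y0, h, n):
    """Forward Euler: advance n steps of size h from (t0, y0)."""
    t, y = t0, y0
    history = [(t, y)]
    for _ in range(n):
        y = y + h * f(t, y)   # Euler update
        t = t + h
        history.append((t, y))
    return history

# Assumed test problem (not from the text): y' = y, y(0) = 1, exact y = e^t
path = euler(lambda t, y: y, 0.0, 1.0, 0.1, 10)
t_end, y_end = path[-1]
# y(1) = e ≈ 2.71828; Euler with h = 0.1 gives (1.1)^10 ≈ 2.5937
```

The gap between 2.5937 and e illustrates the first-order global error: halving h roughly halves the error.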
B. Trapezoidal
In calculus, the “Trapezoidal Rule” is one of the important integration rules. It is so named because, when the area under the curve is evaluated, the total area is divided into small trapezoids instead of rectangles. The rule approximates definite integrals using linear approximations of the function.
The trapezoidal rule is mostly used in the numerical analysis process. To evaluate the definite
integrals, we can also use Riemann Sums, where we use small rectangles to evaluate the area
under the curve.
Then the Trapezoidal Rule formula for approximating the definite integral ∫ab f(x)dx is given by:
∫ab f(x)dx ≈ Tn = (Δx/2)[f(x0) + 2f(x1) + 2f(x2) + … + 2f(xn-1) + f(xn)]
where Δx = (b − a)/n and xi = a + iΔx.
If n → ∞, the R.H.S. of the expression approaches the definite integral ∫ab f(x)dx.
Examples:
1. Approximate the area under the curve y = f(x) between x =0 and x=8 using Trapezoidal
Rule with n = 4 subintervals. A function f(x) is given in the table of values.
Solution:
A ≈ T4 = 3 + 14 + 22 + 18 + 3 = 60
Therefore, the approximate value of area under the curve using Trapezoidal Rule is 60.
2. Approximate the area under the curve y = f(x) between x =-4 and x= 2 using Trapezoidal
Rule with n = 6 subintervals. A function f(x) is given in the table of values.
Solution:
The Trapezoidal Rule formula for n= 6 subintervals is given as:
T6 =(Δx/2)[f(x0)+ 2f(x1)+ 2f(x2)+2f(x3) + 2f(x4)+2f(x5)+ f(x6)]
Therefore, the approximate value of area under the curve using Trapezoidal Rule is 34.
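The composite rule used in the examples above can be sketched as follows; the integrand x² is an illustrative choice, not one of the tabulated examples:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of width dx = (b - a)/n."""
    dx = (b - a) / n
    total = f(a) + f(b)          # endpoint values get weight 1
    for i in range(1, n):
        total += 2 * f(a + i * dx)   # interior values get weight 2
    return (dx / 2) * total

# Assumed integrand (not from the text): integral of x^2 from 0 to 1 is 1/3
print(trapezoid(lambda x: x * x, 0.0, 1.0, 1000))
```

Because the rule interpolates linearly, it is exact for straight lines and its error shrinks with the square of the subinterval width.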
C. Simpson
Simpson’s rule is one of the numerical methods which is used to evaluate the definite integral.
Usually, to find a definite integral, we use the fundamental theorem of calculus, which requires finding an antiderivative. However, it is sometimes not easy to find the antiderivative of an integrand, as in scientific experiments where the function must be determined from observed readings. In such conditions, numerical methods are used to approximate the integral. Other numerical methods include the trapezoidal rule, the midpoint rule, and left or right approximations using Riemann sums.
∫ab f(x)dx ≈ (h/3)[f(a) + 4f((a + b)/2) + f(b)]
where h = (b − a)/2. This is Simpson's ⅓ rule for integration.
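A sketch of the composite form of Simpson's ⅓ rule (with n = 2 it reduces to the single-panel formula above); the integrand x³ is an illustrative choice:

```python
def simpson(f, a, b, n):
    """Composite Simpson's 1/3 rule; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # odd interior points get weight 4, even interior points weight 2
        total += (4 if i % 2 else 2) * f(a + i * h)
    return (h / 3) * total

# Simpson's rule is exact for cubics: integral of x^3 from 0 to 1 is 1/4
print(simpson(lambda x: x ** 3, 0.0, 1.0, 2))
```

Even though it is built from quadratic interpolants, the rule happens to be exact for cubics, which is why its error falls off with the fourth power of h.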
D. Gauss Quadrature
In numerical analysis, an n-point Gaussian
quadrature rule, named after Carl Friedrich
Gauss, is a quadrature rule constructed to yield an
exact result for polynomials of degree 2n − 1 or
less by a suitable choice of the nodes xi and
weights wi for i = 1, ..., n.
The modern formulation using orthogonal polynomials was developed by Carl Gustav Jacobi in
1826. The most common domain of integration for such a rule is taken as [−1, 1], so the rule is stated as
∫ f(x)dx over [−1, 1] ≈ w1 f(x1) + w2 f(x2) + … + wn f(xn),
which is exact for polynomials of degree 2n − 1 or less. This exact rule is known as the Gauss–
Legendre quadrature rule. The quadrature rule will only be an accurate approximation to the
integral above if f (x) is well-approximated by a polynomial of degree 2n − 1 or less on [−1, 1].
The Gauss–Legendre quadrature rule is not typically used for integrable functions with endpoint singularities. Instead, if the integrand can be written as
f(x) = (1 − x)^α (1 + x)^β g(x),   α, β > −1,
where g(x) is well-approximated by a low-degree polynomial, then alternative nodes xi′ and weights wi′ will usually give more accurate quadrature rules. These are known as Gauss–Jacobi quadrature rules, i.e.,
∫ f(x)dx over [−1, 1] ≈ w1′ g(x1′) + w2′ g(x2′) + … + wn′ g(xn′).
Example:
Solution:
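As a sketch, the two-point Gauss–Legendre rule (nodes ±1/√3, weights 1) is exact for polynomials of degree 3 or less; the mapping to a general interval [a, b] and the test integrands are assumed choices:

```python
import math

def gauss_legendre_2pt(f):
    """Two-point Gauss-Legendre rule on [-1, 1]:
    nodes +-1/sqrt(3), weights 1; exact for polynomials of degree <= 3."""
    x = 1.0 / math.sqrt(3.0)
    return f(-x) + f(x)

def gauss_legendre_2pt_ab(f, a, b):
    """Same rule mapped to [a, b] via x = (b + a)/2 + (b - a)/2 * t."""
    mid, half = (b + a) / 2.0, (b - a) / 2.0
    return half * gauss_legendre_2pt(lambda t: f(mid + half * t))

# Exact for cubics: integral of (x^3 + x^2) over [-1, 1] is 2/3
print(gauss_legendre_2pt(lambda x: x ** 3 + x ** 2))
```

Note that two function evaluations suffice for degree-3 exactness, whereas Simpson's rule needs three; this is the 2n − 1 property in action for n = 2.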
III. SOLUTION TO ORDINARY DIFFERENTIAL EQUATION: INITIAL VALUE
PROBLEMS
A. Explicit Euler Method
The explicit Euler method is one of the earliest techniques for solving ordinary differential
equations. It's considered the simplest and most intuitive method for solving initial value
problems.
The explicit Euler method is a step-by-step method: starting from the initial condition, it advances the solution point by point in short steps until it reaches the end of the interval of interest.
Example:
The Canadian population at t=0 (current year) is 35 million. If 2.5% of the population have a
child in a given year, while the death rate is 1% of the population, what will the population be in
50 years? What if 6% of the population have a child in a given year and the death rate is kept
constant at 1%, what will the population be in 50 years?
Solution:
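The population model can be written as dP/dt = (b − d)P, where b and d are the birth and death rates, and advanced with forward Euler; the step size h = 1 year is an assumed choice. For comparison, the exact solution of case 1 is 35e^(0.015·50) ≈ 74.1 million:

```python
def euler_population(p0, birth_rate, death_rate, years, h=1.0):
    """Forward Euler for dP/dt = (birth_rate - death_rate) * P."""
    p = p0
    for _ in range(int(years / h)):
        p = p + h * (birth_rate - death_rate) * p
    return p

# Case 1: 2.5% births, 1% deaths -> net growth 1.5% per year
p1 = euler_population(35.0, 0.025, 0.01, 50)
# Case 2: 6% births, 1% deaths -> net growth 5% per year
p2 = euler_population(35.0, 0.06, 0.01, 50)
print(round(p1, 2), round(p2, 2))  # roughly 73.7 and 401.4 (millions)
```

With h = 1, Euler compounds the net rate once per year, so case 1 is exactly 35·(1.015)^50 ≈ 73.7 million, slightly below the continuous-growth value of 74.1 million.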
B. Modified Euler’s Method – Midpoint Method
In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for numerically solving the differential equation
y′ = f(t, y),   y(t0) = y0.
The explicit midpoint method is given by the formula
yn+1 = yn + h f(tn + h/2, yn + (h/2) f(tn, yn))
for n = 0, 1, 2, 3, …. Here, h is the step size — a small positive number, tn = t0 + nh, and yn is the computed approximate value of y(tn). The explicit midpoint method is sometimes also known as the modified Euler method; the implicit midpoint method is the simplest collocation method and, applied to Hamiltonian dynamics, a symplectic integrator. Note that the modified Euler method can refer to Heun's method; for further clarity see the List of Runge–Kutta methods.
Example:
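The explicit midpoint update above can be sketched as follows; the test problem y′ = y, y(0) = 1 is an assumed example:

```python
import math

def midpoint_method(f, t0, y0, h, n):
    """Explicit midpoint: y_{n+1} = y_n + h f(t_n + h/2, y_n + (h/2) f(t_n, y_n))."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)                              # slope at the left endpoint
        y = y + h * f(t + h / 2, y + (h / 2) * k1)  # slope re-evaluated at the midpoint
        t = t + h
    return y

# Assumed problem: y' = y, y(0) = 1; exact y(1) = e
y1 = midpoint_method(lambda t, y: y, 0.0, 1.0, 0.1, 10)
print(abs(y1 - math.e))  # much smaller than the forward Euler error
```

The extra slope evaluation at the interval midpoint raises the order from one (Euler) to two.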
C. Runge-Kutta Methods (2nd, 3rd and 4th Order Methods)
Example:
Solution:
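The classical fourth-order Runge–Kutta method advances the solution using four slope evaluations per step: k1 = f(tn, yn), k2 = f(tn + h/2, yn + (h/2)k1), k3 = f(tn + h/2, yn + (h/2)k2), k4 = f(tn + h, yn + h·k3), and yn+1 = yn + (h/6)(k1 + 2k2 + 2k3 + k4). A sketch, with y′ = y as an assumed test problem:

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical 4th-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + (h / 2) * k1)
    k3 = f(t + h / 2, y + (h / 2) * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4(f, t0, y0, h, n):
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Assumed problem: y' = y, y(0) = 1; y(1) should be very close to e
y1 = rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10)
print(abs(y1 - math.e))  # error on the order of 1e-6 or smaller
```

Lower-order Runge–Kutta methods follow the same pattern with fewer stages: the midpoint method above is a 2nd-order member of the family.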
D. Modified Euler’s Predictor – Corrector Method
Predictor-Corrector Method:
The predictor-corrector method is also known as the Modified Euler method.
In the Euler method, the tangent is drawn at a point and the slope is calculated for a given step size. The method is therefore exact only for linear functions; for other functions a truncation error remains. To reduce this error, the Modified Euler method is introduced: instead of the slope at a single point, the arithmetic average of the slope over the interval is used.
Thus, in the predictor-corrector method, for each step the predicted value yp of yn+1 is first calculated using Euler's method, then the slopes at the points (xn, yn) and (xn+1, yp) are calculated, and the arithmetic average of these slopes is added to yn to obtain the corrected value of yn+1.
So, the predictor step is:
yp = yn + h f(xn, yn)
and the value is then corrected:
yn+1 = yn + (h/2)[f(xn, yn) + f(xn+1, yp)].
Example:
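A sketch of the predictor-corrector step described above; the test problem y′ = y is an assumed example:

```python
import math

def modified_euler(f, x0, y0, h, n):
    """Predictor-corrector: predict with Euler, correct with the averaged slope."""
    x, y = x0, y0
    for _ in range(n):
        y_pred = y + h * f(x, y)                         # predictor (Euler step)
        y = y + (h / 2) * (f(x, y) + f(x + h, y_pred))   # corrector (average slope)
        x += h
    return y

# Assumed problem: y' = y, y(0) = 1; exact y(1) = e
y1 = modified_euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(abs(y1 - math.e))
```

For this linear test problem the corrected step coincides with the explicit midpoint method, reflecting that both are 2nd-order Runge–Kutta schemes.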
IV. SOLUTION TO ORDINARY DIFFERENTIAL EQUATION: BOUNDARY-VALUE PROBLEMS
A. Shooting Method
The Shooting Method is a numerical technique used to solve boundary value problems
(BVPs) for ordinary differential equations (ODEs). It converts a BVP into an initial value
problem (IVP), which can then be solved using standard numerical methods like Runge-Kutta.
Example:
Solution:
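A sketch of the shooting method: guess the unknown initial slope, integrate the resulting IVP, and adjust the guess (here with secant iteration) until the far boundary condition is met. The BVP y″ = 6x, y(0) = 0, y(1) = 1 (exact solution y = x³, so the true initial slope is 0) and the Heun-style integrator are assumed choices, not from the text:

```python
def integrate(f2, s, h=0.01):
    """Integrate y'' = f2(x) on [0, 1] with y(0) = 0, y'(0) = s; return y(1)."""
    x, y, v = 0.0, 0.0, s
    for _ in range(int(round(1.0 / h))):
        v_pred = v + h * f2(x)                    # Euler predictor for y'
        y = y + (h / 2) * (v + v_pred)            # trapezoidal update for y
        v = v + (h / 2) * (f2(x) + f2(x + h))     # trapezoidal update for y'
        x += h
    return y

def shoot(f2, target, s0=0.0, s1=1.0):
    """Secant iteration on the initial slope s until y(1) matches the target."""
    g0 = integrate(f2, s0) - target
    g1 = integrate(f2, s1) - target
    for _ in range(20):
        if abs(g1) < 1e-12 or g1 == g0:
            break
        s0, s1 = s1, s1 - g1 * (s1 - s0) / (g1 - g0)
        g0, g1 = g1, integrate(f2, s1) - target
    return s1

# Assumed BVP (not from the text): y'' = 6x, y(0) = 0, y(1) = 1
s = shoot(lambda x: 6 * x, 1.0)
print(s)  # close to 0, the exact initial slope
```

Because this BVP is linear, y(1) depends linearly on the slope guess and the secant iteration converges after a single correction; nonlinear BVPs generally need several iterations.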
B. Finite Difference
In numerical analysis, finite-difference methods (FDM) are a class of numerical techniques for
solving differential equations by approximating derivatives with finite differences. Both the
spatial domain and time domain (if applicable) are discretized, or broken into a finite number of
intervals, and the values of the solution at the end points of the intervals are approximated by
solving algebraic equations containing finite differences and values from nearby points.
Finite difference methods convert ordinary differential equations (ODE) or partial differential
equations (PDE), which may be nonlinear, into a system of linear equations that can be solved
by matrix algebra techniques. Modern computers can perform these linear algebra computations
efficiently, and this, along with their relative ease of implementation, has led to the widespread
use of FDM in modern numerical analysis. Today, FDMs are one of the most common
approaches to the numerical solution of PDE, along with finite element methods.
Example:
Solution:
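A sketch of the finite-difference approach for a second-order BVP: central differences turn y″ = f(x) into a tridiagonal linear system, solved here with the Thomas algorithm. The BVP y″ = 2, y(0) = y(1) = 0 (exact solution y = x² − x) is an assumed example, not from the text:

```python
def solve_bvp_fd(rhs, a, b, ya, yb, n):
    """Solve y'' = rhs(x) on [a, b] with y(a) = ya, y(b) = yb using
    second-order central differences on n subintervals (Thomas algorithm)."""
    h = (b - a) / n
    m = n - 1  # number of interior unknowns y_1 .. y_{n-1}
    # Equation i: y_{i-1} - 2 y_i + y_{i+1} = h^2 rhs(x_i)
    sub = [1.0] * m          # sub-diagonal
    diag = [-2.0] * m        # main diagonal
    sup = [1.0] * m          # super-diagonal
    d = [h * h * rhs(a + (i + 1) * h) for i in range(m)]
    d[0] -= ya               # fold boundary values into the right-hand side
    d[-1] -= yb
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, m):
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        d[i] -= w * d[i - 1]
    y = [0.0] * m
    y[-1] = d[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        y[i] = (d[i] - sup[i] * y[i + 1]) / diag[i]
    return [ya] + y + [yb]

# Assumed BVP (not from the text): y'' = 2, y(0) = 0, y(1) = 0; exact y = x^2 - x
sol = solve_bvp_fd(lambda x: 2.0, 0.0, 1.0, 0.0, 0.0, 10)
```

The Thomas algorithm solves the tridiagonal system in O(n) operations, which is what makes fine grids affordable; since the central difference is exact for quadratics, this particular solution matches x² − x to rounding error.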