
CHAPTER - 1

A Brief Discussion On Differential Equation And Collocation Method

1.1 Introduction
1.2 Differential Equation
1.3 Brief Review of MWR
1.4 Selection of Weighting Function
1.5 Solution of Differential Equation And Numerical Method
1.6 Finite Difference Method
1.7 Classification of Partial Differential Equation
1.8 Scope of Present Work
1.1 INTRODUCTION :
A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders. Differential equations play a prominent role in engineering, science and technology, economics and other disciplines, arising whenever a deterministic relationship involving some continuously varying quantities and their rates of change is known or postulated. This is illustrated in classical mechanics, where the motion of a body is described by its position and velocity as time varies. Newton's laws allow one to relate the position, velocity, acceleration and the various forces acting on the body, and to state this relation as a differential equation for the unknown position of the body as a function of time. In some cases this differential equation may be solved explicitly.
An example of modeling a real-world problem using differential equations is the determination of the velocity of a ball falling through the air, considering only gravity and air resistance. The ball's acceleration towards the ground is the acceleration due to gravity minus the deceleration due to air resistance. Gravity is constant, but air resistance may be modelled as proportional to the ball's velocity. This means that the acceleration of the ball, which is the derivative of its velocity, depends on the velocity. Finding the velocity as a function of time therefore requires solving a differential equation.
Differential equations are studied mathematically from several different perspectives, most concerned with their solutions, the set of functions that satisfy the equation. Only the simplest differential equations admit solutions given by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form. If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions to a given degree of accuracy.

1.2 DIFFERENTIAL EQUATION :


Differential equations are divided into two classes depending on the number of independent variables present in the equation. If the differential equation contains only one independent variable then it is called an ordinary differential equation, and if it has more than one independent variable then it is called a partial differential equation.
In general a nonlinear differential equation of order m can be written as

F(t, y, y', y'', ..., y^(m)) = 0

or

y^(m)(t) = f(t, y, y', y'', ..., y^(m-1))

The general solution of an m-th order ordinary differential equation contains m independent arbitrary constants. In order to determine the arbitrary constants in the general solution we need some conditions, known as initial and boundary conditions. If m conditions are prescribed at one point, these are called initial conditions. The differential equation together with the initial conditions is called an initial value problem. Thus the m-th order initial value problem can be expressed as

y^(m)(t) = f(t, y, y', y'', ..., y^(m-1))
y^(p)(t0) = y0^(p) ; p = 0, 1, 2, ..., m-1
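The solution of an initial value problem often has to be obtained numerically. As a small illustration (not taken from this text), the sketch below applies the forward Euler method, the simplest single-step scheme, to y' = -y, y(0) = 1, whose exact solution is exp(-t):

```python
import math

def euler(f, t0, y0, h, n):
    """Forward Euler for y' = f(t, y): take n steps of size h from (t0, y0)."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# Integrate y' = -y, y(0) = 1 up to t = 1; the exact solution is exp(-t).
approx = euler(lambda t, y: -y, 0.0, 1.0, 0.001, 1000)
error = abs(approx - math.exp(-1.0))
print(error)  # a small discretization error, of order h
```

Halving h roughly halves the error, consistent with the first-order accuracy of the method.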
If m conditions are prescribed at more than one point, these are called boundary conditions. The differential equation together with boundary conditions is known as a boundary value problem.
In our discussion here, we deal generally with partial differential equations (PDEs) and methods for finding the solution of PDEs.
The class of differential equations is very wide, and obtaining their solutions is an important task in the study of such equations. There are many methods to find the solutions, and they are categorized into three wide groups:
(1) Analytical methods
(2) Numerical methods
(3) Approximation methods
As a matter of fact, very few differential equations have closed form solutions, and even when such solutions exist, except for very simple cases a great deal of mathematical exercise is required to obtain them.
In the case where an analytical solution of a differential equation is not possible, or rather too difficult to obtain, one looks for an approximation to it as a compromise between accuracy and the labour involved in getting a closed form solution. This can be done by using various approximate methods. These methods are divided into three major classes:
(i) Asymptotic
(ii) Iterative and
(iii) Methods of weighted residuals.
Asymptotic methods are applied to those problems in which physical parameters or variables are very small or very large, or in close proximity to some characteristic value. Thus asymptotic methods have at their foundation a desire to develop solutions which are approximately valid for the above-mentioned physical variables. The well-known regular or singular perturbation techniques are typical asymptotic methods.
Iterative methods include the development of series methods, successive approximations, rational approximations, etc. These methods often perform repetitive calculations through some operation of the form

U_(k+1) = F(U_k, U_(k-1), ..., U_3, U_2, U_1)

These operations result in successive improvements of the approximation and converge to some function U. For instance, transformation of the differential equation to an integral equation leads to a natural iterative process.
The third type of approximate method is known as the method of weighted residuals (often abbreviated and referred to as MWR for convenience). These methods originated in the calculus of variations, and they require that the approximate solution should be close to the exact solution in the sense that the residual is somehow minimized. The method of moments, Galerkin's method, collocation methods and subdomain methods are typical of MWR.

1.3 BRIEF REVIEW OF MWR :


Rayleigh and Ritz developed a powerful direct method in the calculus of variations. Later, in 1915, Galerkin developed the first true weighted residual method. Kantorovich, Krylov, Crandall and Collatz contributed to the development of these methods, which are widely used in linear problems. A brief review of these methods is presented below:
Let x be the vector of independent variables x1, x2, ..., xp in the domain D. Also consider:

L(U) = f(x) ...(1.3.1)

B_i(U) = g_i(x), i = 1, 2, 3, ..., p ...(1.3.2)

where L is a differential operator and the B_i stand for the boundary condition operators. The functions f and g_i are functions of the coordinates involved in the problem. Consider that u(x) is the approximate solution to equation (1.3.1), where

u(x) = Σ_{j=1}^{n} C_j φ_j + φ_0 ...(1.3.3)
and {φ_j}, j = 1, 2, 3, ..., n is a set of trial functions selected beforehand. These φ_j's are chosen in such a way that they satisfy the boundary conditions. This requirement can be modified according to the nature of the problem. The constants C_j, j = 1, 2, 3, ..., n are undetermined parameters, and they can be evaluated in many ways. In MWR, these parameters are so chosen that the weighted averages of the residuals vanish at specified points. In other criteria this is done so as to give a stationary value to a functional related to the given problem. This functional is usually obtained through the calculus of variations. In both cases, for the undetermined values of C_j one arrives at a set of simultaneous algebraic equations in C_j, j = 1, 2, 3, ..., n; in the case of undetermined functions, these become a set of simultaneous differential equations in C_j.
The trial solutions in MWR are selected in a manner such that they satisfy all boundary conditions in both equilibrium and initial value problems. This can be accomplished in a number of ways. One of them is to select the φ_j so that the relations

B_i(φ_0) = g_i,  i = 1, 2, 3, ..., p        ...(1.3.4)
B_i(φ_j) = 0,   i = 1, 2, 3, ..., p,  j ≠ 0

are fulfilled. It is clear then that u(x) given by equation (1.3.3) satisfies all boundary conditions. However, in the case of an initial value problem, the initial conditions often cannot be satisfied this way, and a separate initial residual is established.

For the stationary functional method, it is only required that u(x) matches the essential boundary conditions.
By substitution of the trial solution (1.3.3) into the given equation (1.3.1), an equation for the residual is produced, having the form

R(C, φ) = f(x) - L[u(x)]
        = f(x) - L[φ_0(x) + Σ_{j=1}^{n} C_j φ_j(x)]   ...(1.3.5)

If u(x) is an exact solution, then R is identically equal to zero. Within the restricted trial family, a good approximation is therefore described as one for which R is sufficiently small. The requirement that the weighted averages of R with respect to the weighting functions, denoted by

∫_D W_k R dD,

should vanish brings out linear or non-linear algebraic equations for C_j, j = 1, 2, 3, ..., n.

1.4 SELECTION OF WEIGHTING FUNCTION:


(A) Method Of Moments :
The method of moments suggests that the first n moments of the residual R are to vanish. Mathematically, if W_k = P_k(x) then

∫_D P_k(x) R dD = 0,  k = 1, 2, 3, ..., n   ...(1.4.1)

where the P_k(x) are orthogonal polynomials over the domain D.
This procedure illustrates the theory of orthogonal polynomials very nicely. In one dimension, often W_k = x^k is used, but these are not orthogonal for 0 < x < 1, and better results would be obtained if they were orthogonalized before use. The use of x^k as weight functions justifies the name of this method.

(B) Collocation Method :
The weight function selected in the method of collocation is a special function known as the Dirac delta function. If P_i, i = 1, 2, 3, ..., n are n points in the domain D and W_k = δ(P - P_k) is the weight function, then due to the nature of the Dirac delta function, W_k vanishes everywhere except at P = P_k. This yields

∫_D δ(P - P_k) R dD = R(P_k)   ...(1.4.2)

This means that the residual vanishes at n points in D, and these points are said to be collocation points. The choice of collocation points is made at random and as per the convenience of the user. This type of arbitrary location of collocation points is replaced by the roots of orthogonal polynomials in the method of orthogonal collocation. Besides, in the latter method, the trial functions are constructed with the help of orthogonal polynomials.
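To make the collocation idea concrete, consider a hypothetical model problem (not from this text): u'' = -1 with u(0) = u(1) = 0, and the one-parameter trial family u = C x(1 - x), which already satisfies the boundary conditions. The residual R = u'' + 1 = 1 - 2C is forced to vanish at a single collocation point:

```python
def trial(c, x):
    # Trial family u(x) = c * x * (1 - x); satisfies u(0) = u(1) = 0 for any c.
    return c * x * (1.0 - x)

def residual(c, x):
    # For u'' = -1 the residual is R = u'' + 1; here u'' = -2c for all x.
    return -2.0 * c + 1.0

# Collocation at x = 0.5: require R(c, 0.5) = 0, i.e. 1 - 2c = 0, so c = 0.5.
c = 0.5
print(residual(c, 0.5))   # 0.0
print(trial(c, 0.25))     # 0.09375, matching the exact solution x(1 - x)/2
```

Here the trial family happens to contain the exact solution u = x(1 - x)/2, so one collocation point suffices; in general more trial functions and more collocation points are needed.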

(C) Method Of Subdomain :
The entire domain D of the problem is divided into several partitions, not necessarily disjoint. Say D_1, D_2, D_3, ..., D_n are such divisions of D, and consider W_k(D_k) = 1, W_k(D_j) = 0, j ≠ k, so that

∫_{D_k} R dD = 0,  k = 1, 2, 3, ..., n   ...(1.4.3)

Thus this method insists that the weighted average should vanish in each of the divisions of the whole domain.

(D) The Method Of Least Squares :
In the method of least squares, the weight function is taken to be the partial derivative of the residual with respect to the undetermined parameters C_j, i.e.

W_j = ∂R/∂C_j , j = 1, 2, 3, ..., n

so that

∂/∂C_j ∫_D R² dD = 2 ∫_D (∂R/∂C_j) R dD = 0   ...(1.4.4)

This shows that the integral of the squared residual is minimized with respect to the undetermined parameters. As a result of this relation, one gets n simultaneous algebraic equations in C_j.
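As a sketch of the least squares criterion on a hypothetical model problem (not from this text), take u'' = -1, u(0) = u(1) = 0 with the trial u = C x(1 - x), whose residual is R = 1 - 2C; the integral of R² over (0, 1) is minimized here by a coarse scan over C:

```python
def residual(c, x):
    # Residual of u'' = -1 for the trial u = c * x * (1 - x): R = -2c + 1.
    return -2.0 * c + 1.0

def squared_residual_integral(c, n=200):
    # Midpoint rule for I(c) = integral over (0, 1) of R(c, x)^2 dx.
    h = 1.0 / n
    return sum(residual(c, (k + 0.5) * h) ** 2 for k in range(n)) * h

# Minimize I(c) by scanning c over [0, 1] in steps of 0.01.
best_c = min((k / 100.0 for k in range(101)), key=squared_residual_integral)
print(best_c)  # 0.5, which makes the residual vanish identically
```

In this one-parameter case the scan and the condition (1.4.4) give the same C; with several parameters, (1.4.4) yields one algebraic equation per parameter.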
The trial solution should be so constructed that maximum information can be extracted with minimum effort and time. Thus the selection of trial functions is a very crucial step, though it is very difficult as well. While choosing these, one must be sure that the trial family contains good approximations. A better trial solution can be applied if the behavior of the solution is known to the user to a certain degree. In physical problems a study of the expected physical nature of the solution is very significant. Instead of attempting the approximation throughout the domain of the problem, it is often recommended to break it up into subdomains and treat these as separate portions of study.
Since these procedures are approximate, an important question arises regarding the accuracy of the approximation. In many cases convergence theorems exist to the effect that if iterations are carried on indefinitely, or if the size of the interval is reduced without limit, then the process will yield approximations converging to the true solution. While these theorems are not without value in building the confidence of the analyst, they are of limited practical value if a realistic error bound applicable to the problem does not exist at any or each stage of the computation. When error bounds are not available, the method is applied to a problem having an exact solution in order to verify the utility of the method. The error is then calculated, and a presumption is made that the method produces errors of the same magnitude for similar problems. Though this is not an ideal practice, one has to do so on certain occasions.
Certain error distribution principles should be utilized when the undetermined parameters are to be calculated in all of these techniques. An error is distributed in the approximate solution over a domain D. The error should be orthogonal to a chosen set of linearly independent weighting functions in methods like Galerkin's and the method of moments. The error distribution principles are advantageous as they work directly with the differential equations instead of an equivalent variational problem.
1.5 SOLUTION OF DIFFERENTIAL EQUATION AND NUMERICAL METHOD :
In modern practice a majority of unsolved problems in the life sciences, physical sciences etc., usually governed by nonlinear differential equations, can only be treated by a numerical approach. As a consequence, specialists in various fields have devoted increasing attention to numerical as opposed to analytical techniques. In the early days of research in numerical analysis, because of the restricted capacity of computing machines, the application of numerical methods was possible only for a limited set of problems. Today the situation is different: the computing devices available now are sufficiently advanced and developed to deal with an almost unlimited range of problems, and what is really needed is merely the right choice of effective numerical methods for solving them. Thus rapidly advancing computers have greatly extended the reach of computational work, making it possible in many instances to reject approximate interpretations of applied problems and pass on to the solution of precisely stated problems. No doubt, this involves the utilization of a deeper knowledge and understanding of specialized branches of mathematics. Also, the proper aid of a modern computer is rather impossible to obtain without the skilled use of approximation methods and numerical analysis as well. All of this demands the power inherent in the methods of computational mathematics.

Various types of applications of differential equations to the natural sciences have stretched the limits of the field of differential equations. It is rather difficult to say that every differential equation can be treated numerically by the same method, but this can be qualified by the statement that more than one technique can be attempted to handle the same problem, and vice versa. The study of various differential equations demands an extensive range of methods for their solution, and several numerical methods are found to be used very frequently; some of these are the method of finite differences, Milne's method, perturbation, predictor-corrector, single step, multi step, Runge-Kutta, Taylor's expansion and many other methods. It may be added that for some of these methods ready programmed packages are also available for use with modern computers.
A numerical solution to a differential equation differs in many ways from an analytical solution. The latter provides the value of the dependent variable corresponding to any value of the independent variable. In contrast, in a numerical solution the interval of interest is divided into a predetermined number of increments, which may or may not be of equal length. When the equation is to be solved numerically, initial conditions are necessary to start with at each incremental step. When the solution is completed, the results are presented through graphs and tables.
For a sufficiently small step size a numerical solution closely approximates the true one; if the step size is too large, however, a numerical solution may become unstable. This means that as the solution progresses from one step to the next, the numerical result may begin to oscillate in an uncontrolled manner. This is referred to as numerical instability. If the step size is changed then the stability characteristic varies. The objective is to select an appropriate step size so that the numerical solution is reasonably accurate and no instability results, or at least the stability is established up to some acceptable limit. Traditionally, various choices of step size are made and the respective numerical results are compared. The best solution is then selected. The larger the step size, the shorter the computer time, but it should not be so large as to give excessive error. Also, it is desirable that the step size should be small enough to give accurate results.
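The effect of step size on stability can be seen on the linear test problem y' = λy (an illustrative example, not from this text): forward Euler multiplies the solution by (1 + λh) each step, so for λ = -10 the scheme is stable only when h < 0.2:

```python
def euler_final_value(lam, h, steps):
    # Forward Euler for y' = lam * y, y(0) = 1: each step scales y by (1 + lam*h).
    y = 1.0
    for _ in range(steps):
        y = y + h * lam * y
    return y

stable = euler_final_value(-10.0, 0.05, 100)    # |1 - 0.5| = 0.5 < 1: decays
unstable = euler_final_value(-10.0, 0.25, 100)  # |1 - 2.5| = 1.5 > 1: blows up
print(abs(stable), abs(unstable))
```

The true solution exp(-10t) decays in both cases; only the discretization with the larger step misbehaves, which is exactly the oscillatory growth described above.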
Two common questions are encountered when the numerical solution to a problem is obtained. The first concerns its acceptance: whether it is sufficiently close to the true solution or not. If one has an analytic solution then this can be answered very clearly, but otherwise it is not so easy. One has to be careful while concluding that a particular numerical solution is acceptable when an analytic solution is not available. Normally a method is selected which requires a minimum number of steps, consuming the shortest computational time, and yet does not produce excessive errors.
1.6 FINITE DIFFERENCE METHOD :
The finite difference method (FDM) for the solution of a two-point boundary value problem consists in replacing the derivatives occurring in the differential equations by their finite difference approximations, and then solving the resulting linear system of equations using a standard procedure.
Here we use Taylor's series to obtain the appropriate finite difference approximations to the derivatives.
Expanding y(x + h) by Taylor's series,

y(x + h) = y(x) + h y'(x) + (h²/2!) y''(x) + (h³/3!) y'''(x) + ...   ...(1.6.1a)

On simplifying,

y'(x) = [y(x + h) - y(x)] / h + O(h)   ...(1.6.2)

which is the forward difference approximation for y'(x). Similarly expanding y(x - h) by Taylor's series, we have

y(x - h) = y(x) - h y'(x) + (h²/2!) y''(x) - (h³/3!) y'''(x) + ...   ...(1.6.1b)

from which we obtain

y'(x) = [y(x) - y(x - h)] / h + O(h)   ...(1.6.3)

which is the backward difference approximation for y'(x).
A central difference approximation for y'(x) can be obtained by subtracting (1.6.1b) from (1.6.1a):

y'(x) = [y(x + h) - y(x - h)] / (2h) + O(h²)   ...(1.6.4)

It is clear that equation (1.6.4) is a better approximation to y'(x) than equations (1.6.2) and (1.6.3). On adding equations (1.6.1a) and (1.6.1b), we get an approximation for y''(x):

y''(x) = [y(x - h) - 2y(x) + y(x + h)] / h² + O(h²)   ...(1.6.5)

In a similar manner it is possible to derive difference approximations to higher derivatives.
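The orders of accuracy in equations (1.6.2) and (1.6.4) can be checked numerically. The sketch below (an illustration, not from this text) compares the forward and central approximations of the derivative of sin x at x = 1:

```python
import math

def forward(f, x, h):
    return (f(x + h) - f(x)) / h            # O(h) accurate, eq. (1.6.2)

def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # O(h^2) accurate, eq. (1.6.4)

x, h = 1.0, 1e-3
exact = math.cos(x)  # d/dx sin x = cos x
err_forward = abs(forward(math.sin, x, h) - exact)
err_central = abs(central(math.sin, x, h) - exact)
print(err_forward, err_central)  # the central error is far smaller
```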
Now to solve the boundary value problem, we divide the range [x0, xn] into n equal subintervals of width h, so that

x_i = x0 + ih ; i = 1, 2, ..., n

The corresponding values of y at these points are denoted by y(x_i) = y(x0 + ih) = y_i ; i = 0, 1, 2, ..., n. The above equations (1.6.2), (1.6.3), (1.6.4) and (1.6.5) can now be written as

y_i' = (y_(i+1) - y_i) / h + O(h)                 ...(1.6.2a)

y_i' = (y_i - y_(i-1)) / h + O(h)                 ...(1.6.3a)

y_i' = (y_(i+1) - y_(i-1)) / (2h) + O(h²)         ...(1.6.4a)

y_i'' = (y_(i-1) - 2y_i + y_(i+1)) / h² + O(h²)   ...(1.6.5a)
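Putting equation (1.6.5a) to work, the sketch below (an illustrative example, not from this text) solves the two-point boundary value problem y'' = -1, y(0) = y(1) = 0 on a uniform grid; the resulting tridiagonal system is solved with the Thomas algorithm:

```python
def solve_bvp(n):
    """Solve y'' = -1, y(0) = y(1) = 0 with central differences at n interior points."""
    h = 1.0 / (n + 1)
    # (y[i-1] - 2*y[i] + y[i+1]) / h^2 = -1 gives a tridiagonal linear system.
    a = [1.0] * n       # sub-diagonal
    b = [-2.0] * n      # main diagonal
    c = [1.0] * n       # super-diagonal
    d = [-h * h] * n    # right-hand side
    for i in range(1, n):          # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    y = [0.0] * n                  # back substitution
    y[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        y[i] = (d[i] - c[i] * y[i + 1]) / b[i]
    return y

y = solve_bvp(9)  # interior grid points x_i = i/10, i = 1, ..., 9
exact = [0.1 * i * (1 - 0.1 * i) / 2 for i in range(1, 10)]
err = max(abs(u - v) for u, v in zip(y, exact))
print(err)  # essentially round-off: the scheme is exact for quadratic solutions
```

The second-order central difference reproduces quadratics exactly, so the computed values agree with the exact solution x(1 - x)/2 up to round-off.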

Let the (x, y) plane be divided into a network of rectangles of sides Δx = h and Δy = k by drawing the sets of lines

x = ih ; i = 0, 1, 2, ...
y = jk ; j = 0, 1, 2, ...

The points of intersection of these families of lines are called mesh points, lattice points or grid points. Then, from the above, equation (1.6.2a) becomes

u_x = (u_(i+1,j) - u_(i,j)) / h + O(h)   ...(1.6.6)

Equation (1.6.3a) becomes

u_x = (u_(i,j) - u_(i-1,j)) / h + O(h)   ...(1.6.7)

Equation (1.6.4a) becomes

u_x = (u_(i+1,j) - u_(i-1,j)) / (2h) + O(h²)   ...(1.6.8)

Equation (1.6.5a) becomes

u_xx = (u_(i-1,j) - 2u_(i,j) + u_(i+1,j)) / h² + O(h²)   ...(1.6.9)

Similarly, we have the approximations with respect to the partial derivative in y:

u_y = (u_(i,j+1) - u_(i,j)) / k + O(k)   ...(1.6.6a)

u_y = (u_(i,j) - u_(i,j-1)) / k + O(k)   ...(1.6.7a)

u_y = (u_(i,j+1) - u_(i,j-1)) / (2k) + O(k²)   ...(1.6.8a)

and

u_yy = (u_(i,j-1) - 2u_(i,j) + u_(i,j+1)) / k² + O(k²)   ...(1.6.9a)

1.7 CLASSIFICATION OF PARTIAL DIFFERENTIAL EQUATION :
An equation which involves partial derivatives of an unknown function of two or more independent variables is called a partial differential equation. These equations arise in connection with numerous physical and geometric problems.
A partial differential equation can be classified according to various properties. Some of the characteristic properties are as follows:
The order is the order of the highest derivative that appears in the equation. The dimension is the number of independent variables in the equation; sometimes, for initial value problems, dimension refers to the number of "space" variables while "time" is not counted. An equation is said to be linear if it is linear in the unknown variable and its derivatives. A nonlinear differential equation which is linear in the derivatives of the unknown function is sometimes referred to as quasilinear.
The general second-order linear PDE is of the form

A ∂²u/∂x² + B ∂²u/∂x∂y + C ∂²u/∂y² + D ∂u/∂x + E ∂u/∂y + F u = G

which can be written as

A u_xx + B u_xy + C u_yy + D u_x + E u_y + F u = G   ...(1.7.1)

where A, B, C, D, E, F and G are all functions of x and y.
Equations of the form (1.7.1) are classified with respect to the sign of the discriminant Δ = B² - 4AC in the following way.

If Δ < 0 at a point in the (x, y) plane, the equation is said to be of elliptic type; of hyperbolic type when Δ > 0 at that point; and of parabolic type when Δ = 0.
The particular cases of equation (1.7.1), namely

u_xx + u_yy = 0   (the Laplace equation)          ...(1.7.2)

u_xx - (1/c²) u_tt = 0   (the wave equation)      ...(1.7.3)

and

u_xx - u_t = 0   (the heat conduction equation)   ...(1.7.4)

where (x, y) are space coordinates and t is the time coordinate, illustrate this: it is easy to see that the Laplace equation is of elliptic type, the wave equation is of hyperbolic type, and the heat conduction equation is of parabolic type.
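The classification rule can be written as a small routine (an illustration, not from this text); the sign of the discriminant decides the type:

```python
def classify(A, B, C):
    """Type of A*u_xx + B*u_xy + C*u_yy + (lower-order terms) = G via B^2 - 4AC."""
    disc = B * B - 4.0 * A * C
    if disc < 0:
        return "elliptic"
    if disc > 0:
        return "hyperbolic"
    return "parabolic"

print(classify(1.0, 0.0, 1.0))    # Laplace equation: u_xx + u_yy = 0
print(classify(1.0, 0.0, -1.0))   # wave equation, taking y = t and c = 1
print(classify(1.0, 0.0, 0.0))    # heat equation: u_xx - u_t = 0
```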
Boundary conditions are important in the study of partial differential equations, and they can be of different types. The Dirichlet condition consists in prescribing only the values of the function on a hypersurface. For the Neumann condition, only the values of the derivative of the function along the normal to a hypersurface are specified. The Cauchy condition prescribes both the function values and the normal derivative along a hypersurface. These boundary conditions apply to second-order differential equations. For equations of higher order, the boundary conditions may involve derivatives of higher order.
Clearly the general forms of the parabolic type linear PDE having one, two and three space variables are as follows:

u_t = c² u_xx

u_t = c² (u_xx + u_yy)

u_t = c² (u_xx + u_yy + u_zz)

Similarly, the general forms of the hyperbolic type linear PDE having one, two and three space variables are as follows:

u_tt = c² u_xx

u_tt = c² (u_xx + u_yy)

u_tt = c² (u_xx + u_yy + u_zz)

Here an attempt is made to solve the parabolic and hyperbolic equations in one as well as two space variables.
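As a sketch of how the one-space-variable parabolic case u_t = c² u_xx is treated on the grid of Section 1.6 (an illustrative example, not from this text), the explicit scheme replaces u_t by a forward difference in time and u_xx by (1.6.9); with r = c² Δt / h², each step computes u_i = u_i + r (u_(i-1) - 2u_i + u_(i+1)), and r ≤ 1/2 is required for stability:

```python
import math

def heat_step(u, r):
    # One explicit time step of u_t = c^2 * u_xx with r = c^2 * dt / h^2;
    # the end values are held at zero (Dirichlet boundary conditions).
    interior = [u[i] + r * (u[i - 1] - 2.0 * u[i] + u[i + 1])
                for i in range(1, len(u) - 1)]
    return [0.0] + interior + [0.0]

n = 10
u = [math.sin(math.pi * i / n) for i in range(n + 1)]  # initial temperature profile
for _ in range(50):
    u = heat_step(u, 0.4)  # r = 0.4 satisfies the stability bound r <= 1/2
print(max(u))  # the temperature profile decays toward zero
```

With r above 1/2 the same loop would oscillate and grow, mirroring the instability discussion of Section 1.5.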

1.8 SCOPE OF PRESENT WORK :
Obtaining the numerical solution of partial differential equations through the spline collocation method is a distinctive approach. The thesis is devoted to the study of an application of spline, finite difference and finite element methods along with their various features. Special emphasis is given to the applicability and reliability of the method of spline collocation. All three methods are successfully applied to problems which describe the flow of electricity in transmission lines, heat conduction in a thin rod, heat flow in a thin rectangular plate, a finite vibrating string and a vibrating membrane. The method is generalized to extend its applications to solve parabolic as well as hyperbolic partial differential equations in one and two space variables. This method is also extended to solve problems in more than two space variables, and nonlinear parabolic, hyperbolic and elliptic partial differential equations. In short, this thesis reflects numerical experience with spline functions as interpolants applied to some physical problems which are of particular interest in the present work. It is a well-known fact that integral equations have also become part of the study of certain physical phenomena. Applications to solve integral equations are also sought, and one can hopefully proceed in this area. These will enlarge the dimensions of the applicability of spline functions.
