arXiv:1004.2930v1 [math.NA] 16 Apr 2010
Quadrature based optimal iterative methods
Sanjay K. Khattri,^1
Department of Engineering, Stord Haugesund University College, Norway
Ravi P. Agarwal,
Department of Mathematical Sciences, Florida Institute of Technology,
Melbourne, Florida 32901-6975, U.S.A.
Abstract: We present a simple yet powerful and widely applicable quadrature based scheme for constructing optimal iterative methods. According to the still unproved Kung-Traub conjecture, an optimal iterative method based on n + 1 evaluations can achieve a maximum convergence order of 2^n. Through quadrature, we develop optimal iterative methods of orders four and eight. The scheme can be further applied to develop iterative methods of even higher order. Computational results demonstrate that the developed methods are efficient compared with many well-known methods.
Mathematics Subject Classification (2000): 65H05, 65D99, 41A25
Keywords: iterative methods; fourth order; eighth order; quadrature; Newton; convergence; nonlinear; optimal.
1 Introduction
Many problems in science and engineering require solving the nonlinear equation
\[
f(x) = 0, \tag{1}
\]
[1-13]. One of the best known, and probably the most used, methods for solving the preceding equation is Newton's method. The classical Newton method (NM) is given as follows:
\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad n = 0, 1, 2, 3, \ldots, \quad |f'(x_n)| \neq 0. \tag{2}
\]
Newton's method converges quadratically [1-13]. There exist numerous modifications of Newton's method which improve the convergence rate (see [1-21] and the references therein).
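For illustration, the classical iteration (2) can be sketched in a few lines of Python; the stopping rule, iteration cap and starting guess below are illustrative choices rather than part of the method, and the sample function is taken from the test set of Section 3.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Classical Newton iteration (2): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

# f(x) = x^3 + 4x^2 - 10 has a simple root near 1.365
root = newton(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x, 1.2)
```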
This work presents a new quadrature based scheme for constructing optimal iterative methods of various convergence orders. According to the Kung-Traub conjecture, an optimal iterative method based upon n + 1 evaluations can achieve a convergence order of 2^n. Through the
^1 Corresponding authors.
E-mails: [email protected], [email protected]
scheme, we construct optimal fourth order and eighth order iterative methods. The fourth order method requires three function evaluations, while the eighth order method requires four function evaluations during each iterative step. The next section presents our contribution.
2 Quadrature based scheme for constructing iterative methods
Our motivation is to develop a scheme for constructing optimal iterative methods. To construct higher order methods from the Newton method (2), we use the following generalization of Traub's theorem (see [9, Theorem 2.4] and [13, Theorem 3.1]).
Theorem 1. Let g_1(x), g_2(x), ..., g_s(x) be iterative functions with orders r_1, r_2, ..., r_s, respectively. Then the composite iterative function
\[
g(x) = g_1(g_2(\cdots(g_s(x))\cdots))
\]
defines an iterative method of order r_1 r_2 r_3 \cdots r_s.
From the preceding theorem and the Newton method (2), we consider the fourth order modified double Newton method
\[
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
x_{n+1} = y_n - \dfrac{f(y_n)}{f'(y_n)}.
\end{cases} \tag{3}
\]
The convergence order of the double Newton method is four, and it requires four evaluations during each step. Therefore, according to the Kung-Traub conjecture, for the double Newton method to be optimal it must require only three function evaluations. By Newton's theorem, the derivative in the second step of the double Newton method can be expressed as
\[
f'(y_n) = f'(x_n) + \int_{x_n}^{y_n} f''(t)\, dt, \tag{4}
\]
and we approximate the integral by the quadrature
\[
\int_{x_n}^{y_n} f''(t)\, dt \approx \alpha_1 f(x_n) + \alpha_2 f(y_n) + \alpha_3 f'(x_n). \tag{5}
\]
To determine the real constants α_1, α_2 and α_3 in the preceding equation, we require that the equation hold exactly for the three functions f(t) = constant, f(t) = t and f(t) = t^2. This yields the equations
\[
\begin{cases}
\alpha_1 + \alpha_2 = 0,\\
\alpha_1 x_n + \alpha_2 y_n + \alpha_3 = 0,\\
\alpha_1 x_n^2 + \alpha_2 y_n^2 + 2\,\alpha_3 x_n = 2\,(y_n - x_n).
\end{cases} \tag{6}
\]
Solving the preceding equations and substituting the values into the equations (4) and (5), we obtain
\[
f'(y_n) \approx 2\left[\frac{f(y_n) - f(x_n)}{y_n - x_n}\right] - f'(x_n). \tag{7}
\]
Combining the double Newton method and the preceding approximation of the derivative, we propose the method (M-4)
\[
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
x_{n+1} = y_n - \dfrac{f(y_n)}{2\left[\dfrac{f(y_n) - f(x_n)}{y_n - x_n}\right] - f'(x_n)}.
\end{cases} \tag{8}
\]
The method (M-4) is fourth order convergent and requires only three evaluations; thus, according to the Kung-Traub conjecture, it is an optimal method. We prove the fourth order convergence of the iterative method (8) through the following theorem.
Theorem 2. Let α be a simple zero of a sufficiently differentiable function f : D ⊂ ℝ → ℝ in an open interval D. If x_0 is sufficiently close to α, the convergence order of the method (8) is 4, and the error equation for the method is given as
\[
e_{n+1} = -\frac{\left(c_3 c_1 - c_2^2\right) c_2}{c_1^3}\, e_n^4 + O\!\left(e_n^5\right).
\]
Here, e_n = x_n - α and c_m = f^{(m)}(α)/m! with m ≥ 1.
Proof. The Taylor expansions of f(x_n) and f'(x_n) around the solution α are given as
\[
f(x_n) = c_1 e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + O\!\left(e_n^5\right), \tag{9}
\]
\[
f'(x_n) = c_1 + 2 c_2 e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3 + O\!\left(e_n^4\right). \tag{10}
\]
Here, we have accounted for f(α) = 0. Dividing the equation (9) by (10), we obtain
\[
\frac{f(x_n)}{f'(x_n)} = e_n - \frac{c_2}{c_1}\, e_n^2 - 2\,\frac{c_3 c_1 - c_2^2}{c_1^2}\, e_n^3 - \frac{3 c_4 c_1^2 - 7 c_3 c_2 c_1 + 4 c_2^3}{c_1^3}\, e_n^4 + O\!\left(e_n^5\right). \tag{11}
\]
From the first step of the method (8) and the equations (9) and (10), we obtain
\[
y_n = \alpha + \frac{c_2}{c_1}\, e_n^2 + 2\,\frac{c_3 c_1 - c_2^2}{c_1^2}\, e_n^3 + \frac{3 c_4 c_1^2 - 7 c_2 c_3 c_1 + 4 c_2^3}{c_1^3}\, e_n^4 + O\!\left(e_n^5\right). \tag{12}
\]
By the Taylor expansion of f(y_n) around x_n and using the first step of the method (8), we get
\[
f(y_n) = f(x_n) - f'(x_n)\left[\frac{f(x_n)}{f'(x_n)}\right] + \frac{1}{2}\, f''(x_n)\left[\frac{f(x_n)}{f'(x_n)}\right]^2 - \cdots, \tag{13}
\]
the successive derivatives of f at x_n are obtained by differentiating (10) repeatedly. Substituting these derivatives and the equation (11) into the former equation yields
\[
f(y_n) = c_2 e_n^2 + 2\,\frac{c_3 c_1 - c_2^2}{c_1}\, e_n^3 + \frac{3 c_4 c_1^2 - 7 c_2 c_3 c_1 + 5 c_2^3}{c_1^2}\, e_n^4 + O\!\left(e_n^5\right). \tag{14}
\]
Finally, substituting from the equations (9), (10) and (14) into the second step of the contributed method (8), we obtain the error equation for the method
\[
e_{n+1} = -\frac{\left(c_3 c_1 - c_2^2\right) c_2}{c_1^3}\, e_n^4 + O\!\left(e_n^5\right). \tag{15}
\]
Therefore the contributed method (8) is fourth order convergent. This completes our proof.
To construct an optimal eighth order method, we consider the method
\[
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
z_n = y_n - \dfrac{f(y_n)}{2\left[\dfrac{f(y_n) - f(x_n)}{y_n - x_n}\right] - f'(x_n)},\\[2mm]
x_{n+1} = z_n - \dfrac{f(z_n)}{f'(z_n)}.
\end{cases} \tag{16}
\]
Since the order of the method (8) is four and the order of the method (2) is two, by Theorem 1 the convergence order of the method (16), which is a composition of the methods (8) and (2), is eight. The method (16) requires five function evaluations; therefore, according to the Kung-Traub conjecture, it is not an optimal method. To develop an optimal method, let us again express the first derivative by Newton's theorem:
\[
f'(z_n) = f'(x_n) + \int_{x_n}^{z_n} f''(t)\, dt, \tag{17}
\]
and we approximate the integral by the quadrature
\[
\int_{x_n}^{z_n} f''(t)\, dt \approx \alpha_1 f(x_n) + \alpha_2 f(y_n) + \alpha_3 f(z_n) + \alpha_4 f'(x_n). \tag{18}
\]
To determine the real constants α_1, α_2, α_3 and α_4 in the preceding equation, we require that the equation hold exactly for the four functions f(t) = constant, f(t) = t, f(t) = t^2 and f(t) = t^3. Thus we obtain the four equations
\[
\begin{cases}
\alpha_1 + \alpha_2 + \alpha_3 = 0,\\
\alpha_1 x_n + \alpha_2 y_n + \alpha_3 z_n + \alpha_4 = 0,\\
\alpha_1 x_n^2 + \alpha_2 y_n^2 + \alpha_3 z_n^2 + 2\,\alpha_4 x_n = 2\,(z_n - x_n),\\
\alpha_1 x_n^3 + \alpha_2 y_n^3 + \alpha_3 z_n^3 + 3\,\alpha_4 x_n^2 = 3\,(z_n^2 - x_n^2).
\end{cases}
\]
From the preceding equations and the equations (17) and (18), we get
\[
f'(z_n) \approx \frac{1}{(y_n - x_n)^2\,(z_n - y_n)\,(z_n - x_n)} \Big[ (z_n - y_n)^2 (z_n - x_n)(y_n - x_n)\, f'(x_n) - (x_n - y_n)^2 (x_n + 2 y_n - 3 z_n)\, f(z_n) + (x_n - z_n)^3 f(y_n) - (y_n - z_n)^2 (3 x_n - 2 y_n - z_n)\, f(x_n) \Big].
\]
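Because the weights were chosen to make the quadrature (18) exact for 1, t, t^2 and t^3, the expression above reproduces f'(z_n) exactly whenever f is a polynomial of degree at most three. A quick numerical check of this exactness (the cubic and the nodes are arbitrary choices):

```python
def dfz(f, fprime, x, y, z):
    """Approximation to f'(z) built from f(x), f(y), f(z) and f'(x);
    exact for polynomials of degree <= 3."""
    num = ((z - y)**2 * (z - x) * (y - x) * fprime(x)
           - (x - y)**2 * (x + 2*y - 3*z) * f(z)
           + (x - z)**3 * f(y)
           - (y - z)**2 * (3*x - 2*y - z) * f(x))
    den = (y - x)**2 * (z - y) * (z - x)
    return num / den

# Compare with the exact derivative of the cubic f(t) = t^3 - 2t + 1
f = lambda t: t**3 - 2*t + 1
fp = lambda t: 3*t**2 - 2
approx = dfz(f, fp, 1.0, 2.0, 4.0)   # agrees with fp(4.0) = 46
```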
Combining the eighth order method (16) and the preceding equation, we propose the following optimal eighth order iterative method (M-8)
\[
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
z_n = y_n - \dfrac{f(y_n)}{2\left[\dfrac{f(y_n) - f(x_n)}{y_n - x_n}\right] - f'(x_n)},\\[2mm]
x_{n+1} = z_n - \dfrac{f(z_n)\,(y_n - x_n)^2 (z_n - y_n)(z_n - x_n)}{(z_n - y_n)^2 (z_n - x_n)(y_n - x_n)\, f'(x_n) - (x_n - y_n)^2 (x_n + 2 y_n - 3 z_n)\, f(z_n) + (x_n - z_n)^3 f(y_n) - (y_n - z_n)^2 (3 x_n - 2 y_n - z_n)\, f(x_n)}.
\end{cases} \tag{19}
\]
The method (M-8) is eighth order convergent and requires only four evaluations during each iteration; thus, according to the Kung-Traub conjecture, it is an optimal method. We prove the eighth order convergence of the iterative method (19) through the following theorem.
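A minimal Python sketch of one possible implementation of M-8 follows; the stopping rule, the guard on small |f(z_n)| and the starting guess are illustrative choices, not part of the method (19).

```python
def m8(f, fprime, x0, tol=1e-12, max_iter=20):
    """Eighth order method (19): four evaluations per step,
    namely f(x_n), f'(x_n), f(y_n) and f(z_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = fprime(x)
        y = x - fx / dfx
        fy = f(y)
        z = y - fy / (2*(fy - fx)/(y - x) - dfx)
        fz = f(z)
        if abs(fz) < tol:          # guard: already converged at z_n
            return z
        num = ((z - y)**2 * (z - x) * (y - x) * dfx
               - (x - y)**2 * (x + 2*y - 3*z) * fz
               + (x - z)**3 * fy
               - (y - z)**2 * (3*x - 2*y - z) * fx)
        den = (y - x)**2 * (z - y) * (z - x)
        x = z - fz * den / num
    return x

root = m8(lambda x: x**3 + 4*x**2 - 10, lambda x: 3*x**2 + 8*x, 1.2)
```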
Theorem 3. Let α be a simple zero of a sufficiently differentiable function f : D ⊂ ℝ → ℝ in an open interval D. If x_0 is sufficiently close to α, the convergence order of the method (19) is 8. The error equation for the method (19) is given as
\[
e_{n+1} = \frac{c_2^2 \left( c_3 c_1^3 c_4 - c_4 c_1^2 c_2^2 - c_3^2 c_1^2 c_2 + 2 c_3 c_1 c_2^3 - c_2^5 \right)}{c_1^7}\, e_n^8 + O\!\left(e_n^9\right).
\]
Proof. Substituting from the equations (9), (10), (11) and (14) into the second step of the contributed method (19) yields
\[
z_n = \alpha - \frac{\left(c_3 c_1 - c_2^2\right) c_2}{c_1^3}\, e_n^4 - 2\,\frac{c_2 c_4 c_1^2 + c_3^2 c_1^2 - 4 c_3 c_1 c_2^2 + 2 c_2^4}{c_1^4}\, e_n^5 + O\!\left(e_n^6\right). \tag{20}
\]
Here, we have used the first step of the method (19). To find a Taylor expansion of f(z_n), we consider the Taylor series of f around y_n,
\[
f(z_n) = f(y_n) + f'(y_n)\,(z_n - y_n) + \frac{f''(y_n)}{2}\,(z_n - y_n)^2 + \cdots, \tag{21}
\]
substituting from the equation (14) and using the second step of the contributed method (19), we obtain
\[
f(z_n) = -\frac{\left(c_3 c_1 - c_2^2\right) c_2}{c_1^2}\, e_n^4 - 2\,\frac{c_2 c_4 c_1^2 + c_3^2 c_1^2 - 4 c_3 c_1 c_2^2 + 2 c_2^4}{c_1^3}\, e_n^5 + O\!\left(e_n^6\right). \tag{22}
\]
Here, the higher order derivatives of f at the point y_n are obtained by differentiating the equation (14) with respect to e_n. Finally, to obtain the error equation for the method (19), substituting from the equations (9), (10), (12), (14), (20) and (22) into the third step of the contributed method (19) yields the error equation
\[
e_{n+1} = \frac{c_2^2 \left( c_3 c_1^3 c_4 - c_4 c_1^2 c_2^2 - c_3^2 c_1^2 c_2 + 2 c_3 c_1 c_2^3 - c_2^5 \right)}{c_1^7}\, e_n^8 + O\!\left(e_n^9\right), \tag{23}
\]
which shows that the convergence order of the contributed method (19) is 8. This completes our proof.
3 Numerical examples
Let us review some well-known methods for numerical comparison. Based upon the well-known King's method [12] and the Newton method (2), Li et al. recently constructed a three step, sixteenth order iterative method (LMM)
\[
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
z_n = y_n - \dfrac{2 f(x_n) - f(y_n)}{2 f(x_n) - 5 f(y_n)}\,\dfrac{f(y_n)}{f'(x_n)},\\[2mm]
x_{n+1} = z_n - \dfrac{f(z_n)}{f'(z_n)} - \dfrac{2 f(z_n) - f(w_n)}{2 f(z_n) - 5 f(w_n)}\,\dfrac{f(w_n)}{f'(z_n)}, \qquad w_n = z_n - \dfrac{f(z_n)}{f'(z_n)},
\end{cases} \tag{24}
\]
[16]. Based upon Jarratt's method [6], Ren et al. [17, 18] recently formulated a sixth order convergent iterative family consisting of three steps and two parameters (RWB)
\[
\begin{cases}
y_n = x_n - \dfrac{2}{3}\,\dfrac{f(x_n)}{f'(x_n)},\\[2mm]
z_n = x_n - \dfrac{3 f'(y_n) + f'(x_n)}{6 f'(y_n) - 2 f'(x_n)}\,\dfrac{f(x_n)}{f'(x_n)},\\[2mm]
x_{n+1} = z_n - \dfrac{(2a - b)\, f'(x_n) + b\, f'(y_n) + c\, f(x_n)}{-(a + b)\, f'(x_n) + (3a + b)\, f'(y_n) + c\, f(x_n)}\,\dfrac{f(z_n)}{f'(x_n)},
\end{cases} \tag{25}
\]
where a, b, c ∈ ℝ and a ≠ 0. Wang et al. [18] also developed a sixth order convergent iterative family, based upon the well-known Jarratt's method, for solving nonlinear equations. Their methods consist of three steps and two parameters (WKL)
\[
\begin{cases}
y_n = x_n - \dfrac{2}{3}\,\dfrac{f(x_n)}{f'(x_n)},\\[2mm]
z_n = x_n - \dfrac{3 f'(y_n) + f'(x_n)}{6 f'(y_n) - 2 f'(x_n)}\,\dfrac{f(x_n)}{f'(x_n)},\\[2mm]
x_{n+1} = z_n - \dfrac{(5\alpha + 3\beta)\, f'(x_n) - (3\alpha + \beta)\, f'(y_n)}{2\beta\, f'(x_n) + 2\alpha\, f'(y_n)}\,\dfrac{f(z_n)}{f'(x_n)},
\end{cases} \tag{26}
\]
where α, β ∈ ℝ and α + β ≠ 0. Earlier, Neta [20] developed a sixth order convergent family of methods consisting of three steps and one parameter (NETA)
\[
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
z_n = y_n - \dfrac{f(x_n) + a\, f(y_n)}{f(x_n) + (a - 2)\, f(y_n)}\,\dfrac{f(y_n)}{f'(x_n)},\\[2mm]
x_{n+1} = z_n - \dfrac{f(x_n) - f(y_n)}{f(x_n) - 3 f(y_n)}\,\dfrac{f(z_n)}{f'(x_n)}.
\end{cases}
\]
We may notice that, in the preceding method, with the choice a = -1 the correcting factor in the last two steps is the same. Chun and Ham [21] also developed a sixth order modification of the Ostrowski method. Their family of methods consists of three steps (CH)
\[
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[2mm]
z_n = y_n - \dfrac{f(x_n)}{f(x_n) - 2 f(y_n)}\,\dfrac{f(y_n)}{f'(x_n)},\\[2mm]
x_{n+1} = z_n - H(u_n)\,\dfrac{f(z_n)}{f'(x_n)},
\end{cases} \tag{27}
\]
where u_n = f(y_n)/f(x_n) and H(t) represents a real valued function satisfying H(0) = 1, H'(0) = 2. In the case
\[
H(t) = \frac{1 + (\beta + 2)\, t}{1 + \beta t} \tag{28}
\]
the third substep is similar to the method developed by Sharma and Guha [19]. The classical
Chebyshev method is expressed as (CM)
\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \left[ 1 + \frac{1}{2}\, \frac{f''(x_n)\, f(x_n)}{f'(x_n)^2} \right], \tag{29}
\]
[15] and the classical Halley method is expressed as (HM)
\[
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \left[ \frac{2 f'(x_n)^2}{2 f'(x_n)^2 - f''(x_n)\, f(x_n)} \right], \tag{30}
\]
[15]. The convergence order p of an iterative method is defined by
\[
\lim_{n \to \infty} \frac{|e_{n+1}|}{|e_n|^p} = c \neq 0,
\]
and this leads to the following approximation of the computational order of convergence (COC):
\[
\mathrm{COC} \approx \frac{\ln \left| (x_{n+1} - \alpha)/(x_n - \alpha) \right|}{\ln \left| (x_n - \alpha)/(x_{n-1} - \alpha) \right|}.
\]
For convergence it is required that |x_{n+1} - x_n| < ε and |f(x_n)| < ε. Here, ε = 10^{-320}. We test the methods for the following functions:
methods for the following functions
f
1
(x) = x
3
+ 4 x
2
10, 1.365.
f
2
(x) = x exp(x
2
) sin
2
(x) + 3 cos(x) + 5, 1.207.
f
3
(x) = sin
2
(x) x
2
+ 1, 1.404.
f
4
(x) = tan
1
(x) =0.
f
5
(x) = x
4
+ sin
_
/x
2
_
5, =
2.
f
6
(x) = e
(x
2
+x+2)
1, =2.0.
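The six test functions and the quoted zeros transcribe directly into Python. Since the zeros are quoted only to a few digits, the residuals below are small but not exactly zero; this is just a sanity check of the transcription.

```python
import math

# The six test problems, paired with the (approximate) zeros quoted above
tests = [
    (lambda x: x**3 + 4*x**2 - 10,                                    1.365),
    (lambda x: x*math.exp(x**2) - math.sin(x)**2 + 3*math.cos(x) + 5, -1.207),
    (lambda x: math.sin(x)**2 - x**2 + 1,                             1.404),
    (lambda x: math.atan(x),                                          0.0),
    (lambda x: x**4 + math.sin(math.pi / x**2) - 5,                   math.sqrt(2)),
    (lambda x: math.exp(-x**2 + x + 2) - 1,                           2.0),
]

residuals = [abs(f(alpha)) for f, alpha in tests]   # every entry is small
```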
Computational results are reported in Table 1 and Table 2. Table 1 presents (number of function evaluations, COC during the second last iterative step) for the various methods, while Table 2 reports |x_{n+1} - x_n| for the method M-8. Free parameters are randomly selected as follows: for the method RWB, a = b = c = 1; in the method by Chun et al. (CH), β = 1; in the method WKL, α = β = 1; in the method NETA, a = 10.
An optimal iterative method for solving nonlinear equations must require the least number of function evaluations. In Table 1, the methods which require the least number of function evaluations are marked in bold. We see from Table 1 that the methods contributed in this article perform better than the existing methods in the literature.
f(x) x0 HM CM LMM NM RWB NETA CH WKL M-4 M-8
f1(x) 1.2 (27, 3) (27, 3) (24, 16) (20, 2) (20, 3) (20, 6) (20, 6) (20, 3) (18, 4) (16, 8)
f2(x) 1.0 (24, 3) (24, 3) (24, 15.5) (22, 2) (20, 3) (20, 6) (20, 6) (20, 3) (18, 4) (16, 8)
f3(x) 1.5 (21, 3) (21, 3) (18, 15.8) (20, 2) (20, 3) (20, 6) (20, 6) (20, 3) (18, 4) (16, 8)
f4(x) 0.5 (21, 3) (21, 3) (24, 24) (18, 2) (20, 3) (16, 7) (16, 7) (20, 3) (18, 5) (16, 11)
f5(x) 1.3 (24, 3) (24, 3) (24, 16) (20, 2) (20, 3) (20, 6) (20, 6) (20, 3) (18, 4) (16, 8)
f6(x) 1.2 (24, 3) (27, 3) (24, 16) (26, 2) (20, 3) (20, 6) (20, 6) (20, 6) (21, 4) (20, 8)
Table 1: (number of functional evaluations, COC) for various iterative methods.
f1(x)        f2(x)        f3(x)        f4(x)        f5(x)        f6(x)
1.6 x 10^-1   2.0 x 10^-1   9.5 x 10^-2   4.2 x 10^-1   1.1 x 10^-1   7.9 x 10^-1
3.3 x 10^-9   2.4 x 10^-6   4.0 x 10^-10  1.7 x 10^-5   1.1 x 10^-8   8.6 x 10^-4
6.4 x 10^-71  1.9 x 10^-45  6.3 x 10^-77  3.1 x 10^-55  2.8 x 10^-65  2.9 x 10^-25
1.1 x 10^-564 2.8 x 10^-358 2.5 x 10^-611 1.9 x 10^-601 5.8 x 10^-518 5.8 x 10^-197
***********  ***********  ***********  ***********  ***********  1.3 x 10^-1570
Table 2: Generated |x_{n+1} - x_n| with n ≥ 1 by the method M-8. For initialization see Table 1.
References
[1] Weerakoon S., Fernando T.G.I., A variant of Newton's method with accelerated third-order convergence. Appl. Math. Lett. 13(8): 87-93 (2000).
[2] Frontini M., Sormani E., Some variant of Newton's method with third-order convergence. Appl. Math. Comput. 140: 419-426 (2003).
[3] Homeier H.H.H., On Newton-type methods with cubic convergence. J. Comput. Appl. Math. 176: 425-432 (2005).
[4] Homeier H.H.H., A modified Newton method for rootfinding with cubic convergence. J. Comput. Appl. Math. 157: 227-230 (2003).
[5] Kung H.T., Traub J.F., Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 21: 634-651 (1974).
[6] Jarratt P., Some fourth order multipoint iterative methods for solving equations. Math. Comp. 20(95): 434-437 (1966).
[7] Chun C., Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 195(2): 454-459 (2008).
[8] Maheshwari A.K., A fourth-order iterative method for solving nonlinear equations. Appl. Math. Comput. 211(2): 383-391 (2009).
[9] Traub J.F., Iterative Methods for the Solution of Equations. Prentice Hall, New York, 1964.
[10] Ostrowski A.M., Solution of Equations and Systems of Equations. Academic Press, New York-London, 1966.
[11] Kou J., Li Y., Wang X., A composite fourth-order iterative method. Appl. Math. Comput. 184: 471-475 (2007).
[12] King R., A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 10: 876-879 (1973).
[13] Petkovic M.S., On a general class of multipoint root-finding methods of high computational efficiency. SIAM J. Numer. Anal. 47(6): 4402-4414 (2010).
[14] Khattri S.K., Altered Jacobian Newton iterative method for nonlinear elliptic problems. IAENG Int. J. Appl. Math. 38 (2008).
[15] Kanwar V., Tomar S.K., Modified families of Newton, Halley and Chebyshev methods. Appl. Math. Comput. 192(1): 20-26 (2007).
[16] Li X., Mu C., Ma J., Wang C., Sixteenth-order method for nonlinear equations. Appl. Math. Comput. 215(10): 3769-4054 (2009).
[17] Ren H., Wu Q., Bi W., New variants of Jarratt's method with sixth-order convergence. Numer. Algorithms 52: 585-603 (2009).
[18] Wang X., Kou J., Li Y., A variant of Jarratt method with sixth-order convergence. Appl. Math. Comput. 190: 14-19 (2008).
[19] Sharma J.R., Guha R.K., A family of modified Ostrowski methods with accelerated sixth order convergence. Appl. Math. Comput. 190: 111-115 (2007).
[20] Neta B., A sixth-order family of methods for nonlinear equations. Int. J. Comput. Math. 7: 157-161 (1979).
[21] Chun C., Ham Y., Some sixth-order variants of Ostrowski root-finding methods. Appl. Math. Comput. 193: 389-394 (2003).