Using CAS Calculators To Teach and Explore Numerical Methods
Abstract
We describe the use of CAS calculators in a numerical methods mathematics subject offered to third year pre-service teachers. We show that such calculators, although very low-powered compared with standard computer-based numerical systems, are quite capable of handling textbook problems, and as such provide a very accessible learning environment. We show how CAS calculators can be used to implement some standard numerical procedures, and we also briefly discuss the pedagogical value of our approach.
1 Introduction
Numerical methods, the area of mathematics concerned with finding approximate solutions to intractable problems, has long been the preserve of high-powered computers and computer systems. The increasing speed and power of computers, and the development of accessible software, have meant an increase in the availability of new methods. However, teaching such material has required either access to computers and software, or restriction to simplified problems which can be done on a scientific calculator. The advent of CAS calculators, such as the TI-nspire and the Casio ClassPad, has meant that for the first time students have easy access to a powerful computing environment, and one which they are also likely to have seen at school.
Note that since this article was first written a third CAS calculator, the HP Prime, has
entered the market. It is not yet clear what impact this will have on the use of such calculators
in education.
At Victoria University, (Melbourne, Australia), a “Computational Methods” subject is of-
fered as an elective for third year pre-service teachers who have chosen mathematics as their
principal teaching “method”. The students have had a solid grounding in calculus, statistics,
and some algebra, and have satisfied the requirement by the local government body to be able
to teach mathematics up to and including upper secondary levels. The students have also had
some exposure to the use of technology, in particular CAS calculators earlier in the course. The
computational methods subject is designed to provide the students with a greater expertise in
the use of CAS calculators—including programming—and also to introduce them to a mathe-
matical discipline (numerical methods) which they are most likely to encounter as teachers or
practitioners. Note that the current curriculum mandates the use of CAS calculators at year 12 (the final year of secondary school), and so expertise in their use is now necessary on the part of mathematics teachers.
Here is how the bisection method could be implemented on each calculator, for solving $x^5 + x^2 - 1 = 0$, given that there is a root between 0 and 1:
Using the TI-nspire CAS calculator

Enter the function $f(x) := x^5 + x^2 - 1$ and a little function called "bisect1" which will perform a single bisection step:

    Define bisect1(a) = Func
      Local t
      t := (a[1]+a[2])/2
      If f(t)·f(a[1]) < 0 Then
        Return {a[1],t}
      Else
        Return {t,a[2]}
    EndFunc

Using the Casio ClassPad calculator

As with the TI-nspire we enter the function f(x), and also create a small bisect1 program with a single parameter a (which will be a list of two values):

    Local t
    (a[1]+a[2])/2 ⇒ t
    If f(a[1])×f(t)<0
    Then
      Return {a[1],t}
    Else
      Return {t,a[2]}
With each calculator the function can now be called as many times as we like (watching the two endpoints getting closer together), or used within another program which either runs bisect1 a given number of times or (better) until the difference between the endpoints is less than a given value, such as $10^{-6}$.
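For comparison, the same single-step idea is easy to express in a conventional language. The following Python sketch (not part of the calculator material; the names f and bisect1 simply mirror the calculator versions, and the $10^{-6}$ tolerance is the value suggested above) repeats the step until the bracket is small enough:

    def f(x):
        return x**5 + x**2 - 1

    def bisect1(interval):
        """One bisection step on a bracketing interval [left, right]."""
        left, right = interval
        t = (left + right) / 2
        if f(left) * f(t) < 0:
            return [left, t]      # the root lies in the left half
        else:
            return [t, right]     # the root lies in the right half

    interval = [0, 1]             # f(0) < 0 and f(1) > 0, so a root is bracketed
    while interval[1] - interval[0] > 1e-6:
        interval = bisect1(interval)
    print(interval)               # both endpoints approximate the root near 0.8087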
The standard derivative-based method is of course Newton's method (or the Newton-Raphson method), for which
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
Since each CAS calculator can perform symbolic derivatives, implementation is straightforward.
Using the TI-nspire CAS calculator

    Define f(x) = x^5 + x^2 − 1
    Define nr(x) = x − f(x)/(d/dx(f(x)))
    x := 0.8
    For i,1,8,x:=nr(x):Disp x:EndFor

Using the Casio ClassPad calculator

    Define f(x) = x^5 + x^2 − 1
    Define nr(x) = x − f(x)/diff(f(x),x)
    0.8
    nr(ans)
The ClassPad doesn't allow programming constructs (such as for-loops) outside of a program, so we can simply start off a Newton-Raphson computation and press the EXE key to repeat the previous command, watching the values on the screen. Alternatively, we can use the Sequence module, define the recursive sequence
$$a_{n+1} = \mathrm{nr}(a_n), \qquad a_0 = 0.8,$$
and tabulate a few values.
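The same Newton-Raphson iteration can be sketched in Python; here SymPy is assumed to be available so that, as on the CAS calculators, the derivative is found symbolically:

    import sympy as sp

    x = sp.symbols('x')
    fx = x**5 + x**2 - 1

    # nr(x) = x - f(x)/f'(x), with the derivative computed symbolically,
    # mirroring the calculator definition above.
    nr = sp.lambdify(x, x - fx / sp.diff(fx, x))

    xn = 0.8                    # the starting value used on the calculators
    for i in range(8):
        xn = nr(xn)
        print(xn)               # converges rapidly to about 0.8087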
Other methods, such as the secant method and regula falsi, can be easily implemented using very similar schemes to those shown.
We note that the calculators do of course have inbuilt routines for solving equations: for example, the TI-nspire has "nSolve". However, the point is not simply to use the calculators as black boxes to find a solution, but to teach the students the means by which such solutions are obtained. The students are encouraged to use the inbuilt routines to check their own solutions.
Using the TI-nspire CAS calculator

For the Gauss-Seidel method we start by entering three initial guesses for x, y and z, and then apply the iteration:

    x := 1 : y := 1 : z := 1
    For i,1,10 : x := (5+y−2·z)/4 : y := (7−x+z)/5 : z := (−8−2·x+y)/6 : Disp x,y,z : EndFor

Using the Casio ClassPad calculator

To use a loop for the Gauss-Seidel method, we need to write a program, called, say, gs:

    {1,1,1}⇒{x,y,z}
    For 1⇒i To 10
      approx((5+y−2×z)/4)⇒x
      approx((7−x+z)/5)⇒y
      approx((−8−2×x+y)/6)⇒z
      Print {x,y,z}
    Next
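A Python rendering of the same loop may help make the "use each new value immediately" character of Gauss-Seidel explicit; the coefficients below follow the system as reconstructed in the calculator code above:

    # Ten Gauss-Seidel sweeps; each newly computed value is used at once.
    x, y, z = 1.0, 1.0, 1.0       # initial guesses
    for i in range(10):
        x = (5 + y - 2 * z) / 4
        y = (7 - x + z) / 5
        z = (-8 - 2 * x + y) / 6
        print(x, y, z)            # approaches roughly x = 2.40, y = 0.51, z = -2.05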
2.3 Interpolation
Interpolation is the problem of fitting a (piecewise) polynomial to a sequence of data points.
Although we investigate cubic splines in the course, we just show here how to use Lagrangian
and Newton interpolation.
Given a set of $n+1$ data points $\{(x_i, y_i),\ i = 0, 1, 2, \ldots, n\}$ we can fit an $n$-degree polynomial to it. One standard method is the Lagrangian polynomial, defined as:
$$L(x) = \sum_{k=0}^{n} y_k\, \frac{(x - x_0)(x - x_1)\cdots(x - x_{k-1})(x - x_{k+1})\cdots(x - x_n)}{(x_k - x_0)(x_k - x_1)\cdots(x_k - x_{k-1})(x_k - x_{k+1})\cdots(x_k - x_n)}.$$
Note that in the fraction, the numerator consists of the product of all terms $(x - x_i)$ except for $(x - x_k)$, and the denominator of the product of all non-zero terms $(x_k - x_i)$. This polynomial is more easily generated by first defining
$$q(x) = (x - x_0)(x - x_1)(x - x_2)\cdots(x - x_n)$$
and then
$$L(x) = \sum_{k=0}^{n} y_k\, \frac{q(x)}{(x - x_k)\, q'(x_k)}.$$
That these two definitions of $L(x)$ are equivalent is an elementary calculus exercise. Alternatively, we can set the polynomial to be
$$p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$$
and generate $n+1$ linear equations for the $a_i$ by substituting in turn $x = x_k$ and $p(x) = y_k$. The $a_i$ can then be found by standard linear methods. Suppose for example we wish to fit a cubic polynomial to the four points
$$(x_i, y_i) = (-3, -61),\ (1, -5),\ (2, -1),\ (5, 83).$$
Using the TI-nspire CAS calculator

First define the x and y values:

    xs := {−3,1,2,5}
    ys := {−61,−5,−1,83}

Now using the second method:

    q(x) := product(x − xs)
    sum(q(x)/((x − xs)·(d/dx(q(x))|x = xs))·ys)

The TI-nspire has very elegant list handling, where an operation on a list will automatically be done on every element of the list. So in the final sum, all operations on the lists xs and ys are done on each corresponding element individually and finally added. To find the polynomial by linear methods, a matrix of powers of the x values can be built and the resulting system solved with the simult command.

Using the Casio ClassPad calculator

The commands for the ClassPad are very similar to those on the TI-nspire. With xs and ys defined:

    DelVar x
    Define q(x) = prod(x − xs)
    sum(q(x)/((x − xs)×(d/dx(q(x))|x = xs))×ys)

And the use of matrices is similar, except that the ClassPad doesn't have the equivalent of a simult command for solving matrix equations. So we pre-multiply by the inverse instead:

    listToMat(xs^3, xs^2, xs, xs^0) ⇒ xv
    xv^(−1) × listToMat(ys) ⇒ c
    [[x^3, x^2, x, 1]] × c
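For readers without a calculator to hand, the $q(x)/\bigl((x - x_k)q'(x_k)\bigr)$ construction can be sketched in Python, assuming SymPy is available (the variable names mirror the calculator session above):

    import sympy as sp

    x = sp.symbols('x')
    xs = [-3, 1, 2, 5]
    ys = [-61, -5, -1, 83]

    # q(x) = (x - x0)(x - x1)...(x - xn), as in the calculator session.
    q = sp.Integer(1)
    for xk in xs:
        q *= (x - xk)
    dq = sp.diff(q, x)

    # L(x) = sum over k of  y_k * q(x) / ((x - x_k) * q'(x_k));
    # cancel removes the common factor (x - x_k) from q(x).
    L = sum(yk * sp.cancel(q / (x - xk)) / dq.subs(x, xk)
            for xk, yk in zip(xs, ys))
    print(sp.expand(L))        # the interpolating cubic: x**3 - 2*x**2 + 3*x - 7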
If the x values are equally spaced, then a method called the Newton-Gregory difference formula can be used. Suppose the x values are given by $x_k = x_0 + kh$ (so $h$ is the common difference), and the y values are as before $y_0, y_1, \ldots, y_n$. Create a table of all successive differences of the y values; the rows are denoted $\Delta, \Delta^2, \ldots$, down to $\Delta^n$, which will be a single value. If the leading entries of the rows (starting with $y_0$ itself) are denoted $\Delta_0, \Delta_1, \Delta_2, \ldots, \Delta_n$, then the interpolating polynomial can be written as
$$p(x) = \sum_{k=0}^{n} \Delta_k \binom{z}{k}, \qquad z = \frac{x - x_0}{h}.$$
For example, consider the following table of x and y values, together with their successive differences:
    x:     −3     −1      1      3       5
    y:    185    −31    −39   −367   −1159
    ∆:          −216     −8   −328    −792
    ∆²:                 208   −320    −464
    ∆³:                       −528    −144
    ∆⁴:                                384
The leading entries of each row are the values $\Delta_0, \Delta_1, \Delta_2, \Delta_3, \Delta_4$. Then:
$$p(x) = 185\binom{z}{0} - 216\binom{z}{1} + 208\binom{z}{2} - 528\binom{z}{3} + 384\binom{z}{4}$$
$$\phantom{p(x)} = 185 - 216z + 208\,\frac{z(z-1)}{2} - 528\,\frac{z(z-1)(z-2)}{6} + 384\,\frac{z(z-1)(z-2)(z-3)}{24}.$$
Substituting $(x + 3)/2$ for $z$ in the above expression, and simplifying, produces
$$x^4 - 11x^3 - 17x^2 + 7x - 19.$$
This procedure can also be carried out in the calculators' spreadsheet applications. Enter the values 0 to 4 in the first row, and the y values in column A beginning at cell A2:

         A       B     C     D     E
    1    0       1     2     3     4
    2    185
    3    −31
    4    −39
    5    −367
    6    −1159
In cell B2, enter the expression “= A3 − A2” and copy it into the block of cells B2–E6:
         A       B      C      D      E
    1    0       1      2      3      4
    2    185    −216    208   −528    384
    3    −31     −8    −320   −144
    4    −39    −328   −464
    5    −367   −792
    6    −1159
Notice that the row A2–E2 now consists of all the differences $\Delta_k$. Now in cell A7 enter the formula

    = A2 × nCr((x + 3)/2, A1)

and copy that into cells B7–E7. These cells should now contain the polynomials
$$185,\quad -108(x + 3),\quad 26(x + 1)(x + 3),\quad -11(x - 1)(x + 1)(x + 3),\quad (x + 3)(x + 1)(x - 1)(x - 3).$$
Finally, add them all by entering “= sum(A7 : E7)” in an empty cell somewhere. This will
produce the interpolating polynomial.
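The same forward-difference construction can be sketched in a few lines of Python, again assuming SymPy is available (the data and notation are those of the example above; the helper choose is just the binomial coefficient written out for a symbolic argument):

    import sympy as sp

    x = sp.symbols('x')
    ys = [185, -31, -39, -367, -1159]      # the y values tabulated above
    x0, h = -3, 2                          # first x value and common difference

    # Build the forward-difference table; deltas[k] is the leading entry
    # of the k-th row (with deltas[0] = y0 itself).
    deltas = [ys[0]]
    row = ys[:]
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        deltas.append(row[0])
    # deltas is now [185, -216, 208, -528, 384], as in the table.

    def choose(z, k):
        """Binomial coefficient C(z, k) = z(z-1)...(z-k+1)/k! for symbolic z."""
        num = sp.Integer(1)
        for j in range(k):
            num *= (z - j)
        return num / sp.factorial(k)

    z = (x - x0) / h
    p = sum(d * choose(z, k) for k, d in enumerate(deltas))
    print(sp.expand(p))                    # x**4 - 11*x**3 - 17*x**2 + 7*x - 19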
Similar methods can be used to implement Neville's method, or the method of divided differences (see Cheney and Kincaid [chen12] for discussions of these). Note also that our description above was in fact of the Newton-Gregory forward difference method; there are also backward and central difference methods.
2.4 Quadrature
Quadrature, or numerical integration, is a vital topic in any numerical methods course, and deals with finding approximate values of definite integrals
$$\int_a^b f(x)\,dx$$
where $f(x)$ has an anti-derivative not (easily) expressible in closed form. Examples are the elliptic integrals
$$\int_0^{\phi} \sqrt{1 - k^2 \sin^2 x}\;dx$$
for $k^2 < 1$, which arose initially in conjunction with determining the arc length of an ellipse, but have been shown since to have very deep properties connecting with many other branches
of mathematics. There are a huge number of different quadrature methods, and many of the most useful approximate an integral with a finite sum of the form
$$\int_a^b f(x)\,dx \approx \sum_i w_i f(x_i),$$
where each $x_i \in [a, b]$. The values $w_i$ are called "weights" and the $x_i$ values "abscissae" or "ordinates". In general the weights and ordinates are chosen so that the expression will be exact for a particular class of functions.
One set of quadrature formulas are the "Newton-Cotes" rules, where the $x_i$ are chosen to be equidistant, and for a given $n$ the weights are chosen so that the approximation is correct for all $f(x) = x^k$ for $k \le n$. For example, for $n = 4$, we have
$$\int_a^b f(x)\,dx \approx w_0 f(a) + w_1 f(a + h) + w_2 f(a + 2h) + w_3 f(a + 3h) + w_4 f(b)$$
where $h = (b - a)/4$, and we choose the $w_k$ so that the expression is exact for $f(x) = 1, x, x^2, x^3, x^4$. Since the weights will be independent of the limits of integration, we can choose $a$ and $b$ so that $h = 1$:
$$\int_0^4 f(x)\,dx \approx w_0 f(0) + w_1 f(1) + w_2 f(2) + w_3 f(3) + w_4 f(4).$$
The weights can then be found by substituting each of the $x^k$ for $f(x)$ in this expression, and so obtaining linear equations for the $w_i$ values, which can then be easily solved. Given that
$$\int_0^4 x^k\,dx = \frac{4^{k+1}}{k+1},$$
the linear equations will be
w0 + w1 + w2 + w3 + w4 = 4
w1 + 2w2 + 3w3 + 4w4 = 8
w1 + 4w2 + 9w3 + 16w4 = 64 3
w1 + 8w2 + 27w3 + 64w4 = 64
w1 + 16w2 + 81w3 + 256w4 = 1024
5
where as above h = (b − a)/4. This particular quadrature rule is known as the Newton-Cotes
rule of order four, or Boole’s rule.
We now show how this rule, and indeed other Newton-Cotes rules, can be easily developed on a CAS calculator.
Using the TI-nspire CAS calculator

    n := 4
    ys := seq(∫(x^k, x, 0, n), k, 0, n)
    m := constructMat((j−1)^(i−1), i, j, n+1, n+1)
    m[1,1] := 1
    w := simult(m, list▶mat(ys)ᵀ)
    wᵀ

Using the Casio ClassPad calculator

This has to be written as a program:

    seq(∫(x^k, x, 0, n), k, 0, n) ⇒ y
    fill(0, n+1, n+1) ⇒ m
    For 1 ⇒ i To n+1
      For 1 ⇒ j To n+1
        (j−1)^(i−1) ⇒ m[i,j]
      Next
    Next
    1 ⇒ m[1,1]
    m^(−1) × listToMat(y) ⇒ w
    Return trn(w)
The matrix $M$ of coefficients can be defined by $m_{ij} = (j - 1)^{i-1}$, assuming that $0^0$ returns 1. Both the ClassPad and the TI-nspire return "undefined", so that the value $m_{11}$ has to be entered separately.
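The same computation can be sketched in Python with NumPy; note that Python evaluates 0**0 as 1, so the special case needed on the calculators does not arise here:

    import numpy as np

    n = 4                                   # order of the Newton-Cotes rule
    # Right-hand sides: the integrals of x^k over [0, n] for k = 0..n.
    ys = np.array([n**(k + 1) / (k + 1) for k in range(n + 1)])

    # Coefficient matrix: entry (i, j) is j**i, i.e. f(x) = x^i evaluated at x = j.
    m = np.array([[float(j**i) for j in range(n + 1)] for i in range(n + 1)])

    w = np.linalg.solve(m, ys)
    print(w)    # about [0.3111, 1.4222, 0.5333, 1.4222, 0.3111], i.e. 14/45, 64/45, 24/45, ...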
Having created weights, we can use such a rule to evaluate a definite integral. For example, suppose we approximate
$$\int_0^1 e^{-x^2}\,dx,$$
which has a value $\approx 0.746824132812$, using a value of $h = 0.05$, so that there are 5 uses of Boole's rule. Given the weights in an array $w$, and $h$, we can implement the approximation using
$$\int_0^1 e^{-x^2}\,dx \approx h \sum_{k=0}^{4}\left(\sum_{i=1}^{5} w_i\, f\!\left(\frac{4k + i - 1}{20}\right)\right).$$
Note that both calculators adopt indexing whereby the first element of a list is indexed with
1. This expression can be entered into the calculators almost unchanged, and the result is
0.746824132917, which is in error by only about $10^{-10}$. And in fact, the same composite approach can be applied with a Newton-Cotes rule of any order $m$ to an integral
$$\int_a^b f(x)\,dx.$$
This is slightly inefficient in that some function values (those at the shared endpoints of adjacent subintervals) will be computed twice; in practice, however, this inefficiency is not noticeable.
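As a cross-check, here is a plain Python sketch of the composite rule just described, with the Boole weights written in explicitly and the 1-based index $i$ of the formula shifted to Python's 0-based indexing:

    import math

    # Boole's rule weights, i.e. the solution of the linear system above.
    w = [14/45, 64/45, 24/45, 64/45, 14/45]

    def f(x):
        return math.exp(-x**2)

    h = 0.05                       # five applications of the rule cover [0, 1]
    approx = h * sum(w[i] * f((4*k + i) / 20)
                     for k in range(5) for i in range(5))
    # Points shared by adjacent panels are evaluated twice, as noted above.
    print(approx)                  # about 0.74682413, matching the value quoted above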
As with solving equations, both calculators can solve integrals numerically using inbuilt
routines, and as we noted previously students are encouraged to use these routines to check
their own solutions, and to approximate the errors in their calculations.
Euler's method for the initial value problem $y' = f(x, y)$, $y(x_0) = y_0$, advances an approximate solution one step of size $h$ at a time:
$$x_{n+1} = x_n + h, \qquad y_{n+1} = y_n + h f(x_n, y_n).$$
A problem with Euler's method is that in general it is very inaccurate, and errors tend to accumulate with each step. For example, suppose we take the IVP
$$\frac{dy}{dx} = \frac{xy}{2} + \frac{x}{3}, \qquad y(0) = \frac{1}{3},$$
which can be easily solved to produce
$$y = e^{x^2/4} - \frac{2}{3}.$$
With a step size h = 0.5, we can compute y(xn ) by the exact solution, and the approximate
values yn as computed by Euler’s method:
xn y(xn ) yn Error
0.0 0.333333 0.333333 0.0
0.5 0.397828 0.333333 0.064494
1.0 0.617359 0.458333 0.159025
1.5 1.088388 0.739583 0.348805
2.0 2.051615 1.266927 0.784688
2.5 4.104066 2.233724 1.870343
3.0 8.821069 4.046468 4.774601
3.5 20.714276 7.581319 13.132957
4.0 53.931483 14.798307 39.133176
The errors clearly increase. This can be seen again in the diagram in Figure 1.
Students can play with Euler’s method very easily. First, an exact solution can be computed:
Using the TI-nspire CAS calculator

    f(x,y) := x·y/2 + x/3
    deSolve(y′ = f(x,y) and y(0) = 1/3, x, y)

Using the Casio ClassPad calculator

    Define f(x,y) = x×y/2 + x/3
    dSolve(y′ = f(x,y), x, y, x = 0, y = 1/3)
Figure 1: the true solution compared with the values produced by Euler's method.
With each calculator, the solution can be turned into a function, say s(x), which can be
plotted. Euler’s method can be implemented in a spreadsheet. Defining h as 0.5, a spreadsheet
is created where column A contains the x values 0, 0.5, 1, 1.5, . . . , 3.5, 4.0, and cell B1 contains
the value y0 ; in this case 1/3. In cell B2 enter “= B1 + h × f(A1, B1)” and this formula can be
copied down column B.
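Equivalently, the whole table of Euler values and errors shown earlier can be reproduced with a few lines of Python (a sketch only, using the exact solution derived above):

    import math

    def f(x, y):
        return x * y / 2 + x / 3

    def exact(x):
        return math.exp(x**2 / 4) - 2/3

    h = 0.5
    x, y = 0.0, 1/3
    for n in range(9):                       # x = 0.0, 0.5, ..., 4.0
        print(f"{x:3.1f}  {exact(x):10.6f}  {y:10.6f}  {exact(x) - y:10.6f}")
        y = y + h * f(x, y)                  # one Euler step
        x = x + h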
It is not hard to show how Euler's method can be improved; in fact Euler's method may be considered as a first order Taylor series approximation. If
$$y(x + h) = y(x) + hy'(x) + \frac{h^2}{2}y''(x) + \frac{h^3}{6}y'''(x) + \cdots,$$
then truncation after the second term produces
$$y(x + h) \approx y(x) + hy'(x) = y(x) + h f(x, y),$$
which is Euler's method. Other methods provide accuracy equal to higher order Taylor series,
and one very popular family of methods is the Runge-Kutta family, where the accuracy of a high-order
Taylor series is obtained by a judicious use of nested functions. These are extraordinarily
difficult to develop—there is a great deal of complicated algebra involved—but the results can
have a pleasing elegance. One fourth order method is defined as:
$$\begin{aligned}
k_1 &= f(x_n, y_n),\\
k_2 &= f\!\left(x_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2}k_1\right),\\
k_3 &= f\!\left(x_n + \tfrac{h}{2},\; y_n + \tfrac{h}{2}k_2\right),\\
k_4 &= f(x_n + h,\; y_n + hk_3),\\
y_{n+1} &= y_n + \frac{h}{6}(k_1 + 2k_2 + 2k_3 + k_4),
\end{aligned}$$
where as for Euler’s method xn+1 = xn + h. (One of the very few texts which provides a full
algebraic construction of this method is the venerable text of Ralston & Rabinowitz [rals65].)
Applying this to the above equations produces these values:
xn y(xn ) yn Error
0.0 0.333333 0.333333 0.0
0.5 0.397828 0.397827 0.000001
1.0 0.617359 0.617345 0.000009
1.5 1.088388 1.088322 0.000044
2.0 2.051615 2.051205 0.000410
2.5 4.104066 4.101802 0.002264
3.0 8.821069 8.809320 0.011749
3.5 20.714276 20.654545 0.059731
4.0 53.931483 53.624082 0.307401
The errors are very much smaller than in Euler’s method, even though they increase with
each step. This is to be expected, and these errors can be made smaller either by using a
smaller step size (with a step size of 0.1 and 40 steps, the approximate value at x = 4.0 is
about 0.000848 in error), or by using two Runge-Kutta methods simultaneously, and adjusting
the step size each step according to the errors between the two methods. Figure 2 shows the
remarkable precision of a single Runge-Kutta method, even with a fairly large step size, as
compared to Euler’s method.
Figure 2: the true solution compared with the Runge-Kutta values.
As for Euler's method, this can be implemented in a spreadsheet. Start by entering the x values (in our example from 0 to 4 in steps of h = 0.5) in column A and the value y(0) = 1/3 in cell B1. In cells C1 to F1 enter the values of the $k_i$: in C1 enter "=f(A1,B1)", in D1 enter "=f(A1+h/2,B1+h/2*C1)", in E1 enter "=f(A1+h/2,B1+h/2*D1)" and in F1 enter "=f(A1+h,B1+h*E1)". Then in cell B2 enter "=B1+h/6*(C1+2*D1+2*E1+F1)". Then copy cells C1–F1 to cells C2–F2, and finally copy cells B2–F2 down as far as needed.
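The same fourth order scheme can also be written as a short Python sketch (again only a sketch; the names k1 to k4 mirror the formulas above, and the ODE and exact solution are those of the running example):

    import math

    def f(x, y):
        return x * y / 2 + x / 3

    def exact(x):
        return math.exp(x**2 / 4) - 2/3

    def rk4_step(x, y, h):
        k1 = f(x, y)
        k2 = f(x + h/2, y + h/2 * k1)
        k3 = f(x + h/2, y + h/2 * k2)
        k4 = f(x + h, y + h * k3)
        return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

    h = 0.5
    x, y = 0.0, 1/3
    for n in range(8):
        y = rk4_step(x, y, h)
        x = x + h
        print(f"{x:3.1f}  {y:10.6f}  {exact(x) - y:10.6f}")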
In both the TI-nspire and the Casio ClassPad there are methods for producing graphs
similar to those shown in figures 1 and 2.
Newer versions of the calculators, or of their operating systems, include methods for com-
puting Runge-Kutta or Euler steps. However, as for previous topics, we are keen to provide
the students with some deeper knowledge about these methods’ use and practice, hence we
encourage creating the algorithms from scratch. It is also quite possible to write programs to
implement these methods:
Using the TI-nspire CAS calculator

Enter the function

    f(x,y) := x·y/2 + x/3

and a program called "euler" which will perform as many iterations of Euler's method as required:

    Define euler(a,b,h,n) =
    Prgm
      Local xn,yn,xp,yp
      xp := a
      yp := b
      Disp xp, yp
      For i,1,n
        xn := xp + h
        yn := yp + h·f(xp,yp)
        xp := xn
        yp := yn
        Disp xp, yp
      EndFor
    EndPrgm

Using the Casio ClassPad calculator

As with the TI-nspire we enter the function f(x,y), and also create a function euler which will have a, b, h and n as parameters:

    Local xp,yp,xn,yn
    a ⇒ xp
    b ⇒ yp
    Print {xp,yp}
    For 1 ⇒ i To n
      xp+h ⇒ xn
      yp+h×f(xp,yp) ⇒ yn
      approx(xn) ⇒ xp
      approx(yn) ⇒ yp
      Print {xp,yp}
    Next
Note that in both programs, the values xp and yp represent the current values of x and y, and
the values xn, yn the new values computed by Euler’s method. It is a trivial matter to edit
these programs to implement the Runge-Kutta method described above.
In the classes we show both approaches to students: spreadsheets and programs, and invite
the students to choose their preferred method.
3 Conclusions
We have shown that many standard numerical tools can be developed and explored on a CAS
calculator, and that in this respect the two current (as of the time of writing) competitors are
equivalent in their functionality and their accessibility. And it is quite possible to go a great
deal further: to investigate methods other than the few discussed above, for example topics such as error propagation, computation of eigensystems, or a deeper investigation into differential equations.
4 Acknowledgements
The author gratefully acknowledges the insightful comments of the reviewers, who suggested
many ways that the original article could be improved.