
The Application of Optimal Control Theory
to Life Cycle Strategies

Lucinda Mary Whitfield

Dissertation submitted in partial fulfilment
for the Degree of Master of Science

September 1993.
Abstract

The aim of this work is to develop a numerical model that can be used in the calculation of optimal life cycle strategies of given organisms. The theory used from [6] assumes that the organisms have either a two phase `bang-bang' life cycle strategy or a three phase life cycle strategy with the second phase being a singular arc.
Contents

1 Introduction 1
  1.1 What is Optimal Control Theory? 1
  1.2 Life Cycle Strategies 2

2 Optimal Control Theory 4
  2.1 Pontryagin's Principle 5
  2.2 Singular Intervals 8

3 Biological Problem 14
  3.1 Notation 14
  3.2 Biological Problem 15
  3.3 Two phase solution 16
    3.3.1 Finding a two phase solution 17
  3.4 Three phase solution 20
    3.4.1 Finding a three phase solution 21

4 Development of Numerical Method 25
  4.1 Solving the state equations 25
  4.2 Solving the adjoint equations 28
  4.3 Ways of finding optimal solutions 29
    4.3.1 Projected gradient method 30
    4.3.2 Conditional gradient method 31

5 Numerical Results 34
  5.1 The Effects of Initial Data 34
  5.2 Testing Optimality 35
  5.3 Convergence 36
  5.4 Moving the Initial Switching Point 37
  5.5 Changing the Step Length in the Gradient Direction 40

6 Conclusions 43

Acknowledgements 45

Bibliography 46
Chapter 1
Introduction

The aim of this work is to develop a numerical method which will find the optimal allocation of resources between growth and reproduction for a given organism and, hence, an organism's optimal life cycle strategy. A brief introduction to optimal control theory and life cycle strategies is given here. In addition, an overview of the project's content and its organisation is included.

1.1 What is Optimal Control Theory?

Optimal control theory arises from the consideration of physical systems which are required to achieve a definite objective as "cheaply" as possible. The translation of the design objectives into a mathematical model gives rise to what is known as the control problem. The essential elements of a control problem are:

- The system which is to be "controlled"
- A system objective
- A set of admissible "controls" (inputs)
- A performance functional which measures the effectiveness of a given "control action"

The system objective is a given state or set of states which may vary with time. Restrictions, or constraints as they are normally called, are placed on the set of controls (inputs) to the system; controls satisfying these constraints are said to belong to the set of admissible controls. A formal definition of the optimal control problem and the associated theory of Pontryagin's principle is looked at in chapter two.

1.2 Life Cycle Strategies

An organism's life cycle strategy is determined by the way in which it allocates energy between growth and reproduction. There are just two main ways in which organisms allocate energy. These are:

- Determinate growth, which results in a bang-bang allocation strategy. All energy is initially allocated to growth; when maturity is reached, all energy is switched to reproduction.
- Indeterminate growth, which results in a split energy allocation. This allows both growth and reproduction to take place simultaneously.

The general analytical model of the resource allocation problem developed in [6] considers the process of resource allocation as an optimal control problem. It uses Pontryagin's maximum principle to find the optimal allocation of energy between growth and reproduction for a given organism.

The idea of finding the optimal allocation of resources between growth and reproduction is the motivation behind this work. The aim is to develop a numerical method for the optimisation problem.

The work begins by looking at the optimal control problem and Pontryagin's maximum principle before moving on to look at the biological problem and its analytical solution in greater detail. The numerical method for this problem is then developed in chapter four. The method begins by solving the state and adjoint equations for an arbitrary control $u$, before looking at ways of making the control $u$ optimal. The model developed is assessed by using trial problems to which analytical solutions can be found.
Chapter 2
Optimal Control Theory
The optimal control problem can be formulated as the finding of the control variables $u_j(t)$ $(j = 1, \ldots, n)$ and state variables $x_i(t)$ $(i = 1, \ldots, m)$ satisfying the differential equation

$$\dot{x}_i = f_i(x, u, t),$$

with end conditions $x(t_0) = x_{t_0}$ specified, such that a particular control vector $u = (u_1, \ldots, u_n)^T$ and state vector $x = (x_1, \ldots, x_m)^T$ minimises or maximises the cost functional

$$J = \phi(x(T), T) + \int_{t_0}^{T} F(x, u, t)\, dt.$$

The admissible controls for this problem are in fact piecewise continuous functions with a finite number of jump discontinuities on the interval $[t_0, T]$. The functions $F$ and $f$ are continuously differentiable with respect to $x$, but only satisfy a Lipschitz condition with respect to $u$.
2.1 Pontryagin's Principle

Pontryagin's principle was developed to deal with control problems where the variables are subject to magnitude constraints of the form $|u_i(t)| \le k_i$. This implies that the set of final states which can be achieved is limited. The assumptions made about $u$, $F$ and $f$ when defining the optimal control problem are assumed to hold for the $u$, $F$ and $f$ of Pontryagin's principle.

THEOREM 2.1 (Pontryagin's Principle) Necessary conditions for a control $u^* \in U$, the admissible control region, to minimise

$$J(u) = \phi(x(T), T) + \int_0^T F(x, u, t)\, dt \quad (2.1)$$

subject to

$$\dot{x} = f(x, u, t), \qquad x(0) = x_0, \quad (2.2)$$

are that there exists a continuous vector function $\lambda(t)$ that satisfies

$$\dot{\lambda} = -\frac{\partial H}{\partial x} \quad (\text{adjoint equation}) \quad (2.3)$$

$$\lambda(T) = \frac{\partial \phi}{\partial x}\bigg|_T \quad (\text{transversality condition}) \quad (2.4)$$

and that

$$H(x^*, u, \lambda^*, t) \ge H(x^*, u^*, \lambda^*, t) \quad (2.5)$$

for all admissible controls $u$ satisfying the constraints, where $H$ is the Hamiltonian function defined as

$$H(x, u, \lambda, t) = F(x, u, t) + \lambda^T f(x, u, t). \quad (2.6)$$

$u$ is assumed to be piecewise continuous and to satisfy the constraints.
In time dependent problems Pontryagin's principle can be used to determine either a minimum or a maximum of the functional $J(u)$.

Example
Consider the angular motion of a ship which is given by

$$\frac{d^2\theta}{dt^2} + \frac{d\theta}{dt} = u, \quad (2.7)$$

where the rudder setting $u$ is subject to $|u| \le 1$. Find $u$ to minimise the time required to change course from $\theta = 1$, $\frac{d\theta}{dt} = 0$ when $t = 0$ to $\theta = 0$, $\frac{d\theta}{dt} = 0$ when $t = T$. The first order equations are formed by taking $x_1 = \theta$, $x_2 = \dot\theta$. Then

$$\dot\theta = \dot{x}_1 = x_2, \qquad x_1(0) = 1, \quad x_1(T) = 0, \quad (2.8)$$
$$\ddot\theta = \dot{x}_2 = u - x_2, \qquad x_2(0) = 0, \quad x_2(T) = 0. \quad (2.9)$$

Hence we wish to

$$\min_u \int_0^T 1\, dt \quad (2.10)$$

subject to equations (2.8) and (2.9). We now form the Hamiltonian function

$$H(x_1, x_2, u, t, \lambda_1, \lambda_2) = 1 + \lambda_1 x_2 + \lambda_2(u - x_2). \quad (2.11)$$

The adjoint equations can be found to be

$$\dot\lambda_1 = -\frac{\partial H}{\partial x_1} = 0, \quad (2.12)$$
$$\dot\lambda_2 = -\frac{\partial H}{\partial x_2} = \lambda_2 - \lambda_1. \quad (2.13)$$

These can be solved to give

$$\lambda_1 = A, \quad (2.14)$$
$$\lambda_2 = A + Be^t, \quad (2.15)$$

where $A$ and $B$ are constants to be determined. Applying Pontryagin's principle we get

$$1 + \lambda_1 x_2 + \lambda_2(u - x_2) \ge 1 + \lambda_1 x_2 + \lambda_2(u^* - x_2), \quad (2.16)$$

i.e.

$$\lambda_2(u - u^*) \ge 0. \quad (2.17)$$

Hence

$$u^* = -1 \quad \text{if } \lambda_2 > 0, \quad (2.18)$$
$$u^* = 1 \quad \text{if } \lambda_2 < 0. \quad (2.19)$$

From equation (2.15) we can see that $\lambda_2$ can change sign at most once. The equations for $x_1$ and $x_2$ are now found to be

$$x_1 = ut - Ce^{-t} + D, \quad (2.20)$$
$$x_2 = u + Ce^{-t}, \quad (2.21)$$

where $C$ and $D$ are constants to be found. There are two possible cases to consider: $u = -1$ at $t = 0$, $u = 1$ at $t = T$; and $u = 1$ at $t = 0$, $u = -1$ at $t = T$. From equations (2.15), (2.18) and (2.19) it can be determined that $u = -1$ at $t = 0$ is the correct case; $x_1$ and $x_2$ can now be found in terms of known values. Hence

$$x_1 = 2 - t - e^{-t} \quad \text{for } t \le t_s,$$
$$x_1 = t + e^{T-t} - (T + 1) \quad \text{for } t \ge t_s, \quad (2.22)$$

$$x_2 = e^{-t} - 1 \quad \text{for } t \le t_s,$$
$$x_2 = 1 - e^{T-t} \quad \text{for } t \ge t_s, \quad (2.23)$$

from which we can determine the switching time $t_s$ by equating the pairs of equations (2.22) and (2.23) for $x_1$ and $x_2$ at the switching time $t_s$, giving $t_s = \frac{1}{2}(T + 1)$. All that remains is to calculate the final time $T$. This is straightforward and gives $T = 2\cosh^{-1}(e^{\frac{1}{2}})$.
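The bang-bang solution can be checked numerically. The sketch below (written for this discussion, assuming the closed forms $x_1 = 2 - t - e^{-t}$, $x_2 = e^{-t} - 1$ before the switch and $x_1 = t + e^{T-t} - (T+1)$, $x_2 = 1 - e^{T-t}$ after it, with $t_s = \frac{1}{2}(T+1)$ and $T = 2\cosh^{-1}(e^{1/2})$) verifies the boundary conditions and the continuity of the state at the switch:

```python
import math

# Final time and switching time of the bang-bang steering solution
T = 2.0 * math.acosh(math.exp(0.5))
ts = 0.5 * (T + 1.0)

def x1(t):
    # theta(t): u = -1 before the switch, u = +1 after it
    if t <= ts:
        return 2.0 - t - math.exp(-t)
    return t + math.exp(T - t) - (T + 1.0)

def x2(t):
    # theta-dot(t)
    if t <= ts:
        return math.exp(-t) - 1.0
    return 1.0 - math.exp(T - t)

print(x1(0.0), x2(0.0))   # start: theta = 1, rate = 0
print(x1(T), x2(T))       # end:   theta = 0, rate = 0
# continuity of x2 across the switch
print(abs((math.exp(-ts) - 1.0) - (1.0 - math.exp(T - ts))))
```

The continuity check works because $t_s$ was chosen precisely to make the two trajectory segments meet.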

2.2 Singular Intervals

In the special case where the Hamiltonian $H$ is linear in the control vector $u$, it is possible for the coefficient of the linear term to vanish over a finite interval of time. This gives rise to a singular interval, and Pontryagin's principle gives no information about the control variable $u$. During this phase an additional condition is needed. The additional condition is a condition for local optimality, which holds throughout the singular interval, and is the second order Clebsch-Legendre condition, stated here without proof. The required condition for a maximum is

$$\frac{\partial}{\partial u}\left[\frac{d}{dt}\left(\frac{\partial H}{\partial u}\right)\right] > 0. \quad (2.24)$$

For proof of this see [2]; examples can also be found in this reference.
Example
Consider the control strategy that causes the response of the system

$$\dot{x}_1(t) = x_2(t), \qquad \dot{x}_2(t) = u(t), \quad (2.25)$$

subject to specified initial conditions $x_1(0)$, $x_2(0)$, to minimise

$$J = \frac{1}{2}\int_0^T \left(x_1^2(t) + x_2^2(t)\right) dt. \quad (2.26)$$

The final time $T$ and the final states are free, and the controls are constrained by the inequality $|u(t)| \le 1$. We now form the Hamiltonian function

$$H(x_1, x_2, u, t, \lambda_1, \lambda_2) = \frac{1}{2}x_1^2 + \frac{1}{2}x_2^2 + \lambda_1 x_2 + \lambda_2 u. \quad (2.27)$$

Applying Pontryagin's principle,

$$\frac{1}{2}x_1^2 + \frac{1}{2}x_2^2 + \lambda_1 x_2 + \lambda_2 u \ge \frac{1}{2}x_1^2 + \frac{1}{2}x_2^2 + \lambda_1 x_2 + \lambda_2 u^*, \quad (2.28)$$

i.e.

$$\lambda_2(u - u^*) \ge 0. \quad (2.29)$$

Hence

$$u^* = 1 \quad \text{if } \lambda_2 < 0,$$
$$u^* = -1 \quad \text{if } \lambda_2 > 0. \quad (2.30)$$

Consider the existence of a singular interval $[t_1, t_2]$, in which case $\lambda_2 = 0$ for all $t \in [t_1, t_2]$. Thus, for $t \in [t_1, t_2]$,

$$\lambda_2 = \dot\lambda_2 = \ddot\lambda_2 = \cdots = 0. \quad (2.31)$$

The adjoint equations are found to be

$$\dot\lambda_1 = -\frac{\partial H}{\partial x_1} = -x_1, \quad (2.32)$$
$$\dot\lambda_2 = -\frac{\partial H}{\partial x_2} = -x_2 - \lambda_1, \quad (2.33)$$

and hence, for $t \in [t_1, t_2]$, $\lambda_1 = -x_2$. Since the final time $t = T$ is free and $t$ does not appear explicitly in the Hamiltonian, $H[u^*] \equiv 0$; therefore

$$\frac{1}{2}x_1^2 + \frac{1}{2}x_2^2 + \lambda_1 x_2 + \lambda_2 u = 0 \quad \text{for } t \in [0, T]. \quad (2.34)$$

Therefore, for $t \in [t_1, t_2]$,

$$0 = \frac{1}{2}x_1^2 + \frac{1}{2}x_2^2 + \lambda_1 x_2, \quad \text{since } \lambda_2 = 0,$$
$$0 = \frac{1}{2}x_1^2 - \frac{1}{2}x_2^2. \quad (2.35)$$

Thus, equation (2.35) implies, for $t \in [t_1, t_2]$,

$$x_1 + x_2 = 0 \quad (2.36)$$
or

$$x_1 - x_2 = 0. \quad (2.37)$$

Differentiating equation (2.36) and substituting in the state equation gives

$$\dot{x}_2 = -x_2, \quad (2.38)$$

which implies

$$u(t) = -x_2(t) \quad \text{for all } t \in [t_1, t_2], \quad (2.39)$$

and, therefore, $x_1(t), x_2(t) \in [-1, 1]$. Similarly, differentiating equation (2.37), it can be shown that

$$u(t) = x_2(t) \quad \text{for all } t \in [t_1, t_2], \quad (2.40)$$

and $x_1(t), x_2(t) \in [-1, 1]$. Equations (2.36) and (2.37) define the locus of points in the state plane $(x_1, x_2)$ where singular intervals may exist.

[Figure: the $(x_1, x_2)$ state plane showing the two candidate singular lines, $x_1 = -x_2$ (where $u = -x_2$) and $x_1 = x_2$ (where $u = x_2$).]

Since the system moves away from the origin on the line $x_1 = x_2$, this segment cannot form part of an optimal trajectory.
Suppose $\lambda_2(t) \ne 0$; then

$$u = 1 \quad \text{if } \lambda_2 < 0,$$
$$u = -1 \quad \text{if } \lambda_2 > 0. \quad (2.41)$$

Hence,

$$x_2(t) = \pm t + C_1 \quad (2.42)$$

and therefore

$$x_1(t) = \pm\frac{t^2}{2} + C_1 t + C_2, \quad (2.43)$$

where $C_1$ and $C_2$ are constants to be determined. Since $x(0)$ lies in the first quadrant of the state plane $(x_1, x_2)$, the optimal control should be $u^* = -1$ initially. Consider now the possibility of a singular interval. If $u^*$ switches from $-1$ to $1$ at some time $t_1$, then $\lambda_2(t_1) = 0$. Since $\lambda_2 > 0$ for $t < t_1$ and $\lambda_2 < 0$ for $t > t_1$,

$$\dot\lambda_2(t_1) < 0. \quad (2.44)$$

However, $H[u^*] \equiv 0$ and $\lambda_2 = 0$, which implies

$$\lambda_1(t_1) = -\frac{x_1^2(t_1) + x_2^2(t_1)}{2x_2(t_1)}. \quad (2.45)$$

Substituting for $\lambda_1(t_1)$ in the adjoint equation gives

$$\dot\lambda_2(t_1) = \frac{[x_1(t_1) + x_2(t_1)][x_1(t_1) - x_2(t_1)]}{2x_2(t_1)}. \quad (2.46)$$

Thus, for $x$ in the fourth quadrant of the $(x_1, x_2)$ plane,

$$\frac{x_1(t_1) - x_2(t_1)}{2x_2(t_1)} < 0, \quad (2.47)$$

and, hence,

$$\dot\lambda_2(t_1) < 0 \quad \text{for } x_1 + x_2 > 0, \quad (2.48)$$

whilst

$$\dot\lambda_2(t_1) > 0 \quad \text{for } x_1 + x_2 < 0. \quad (2.49)$$

Comparing with equation (2.44), it follows that switching is only allowed in the case when $x_1 + x_2 > 0$. There are three possible cases:

1. $x_1(0) + \frac{1}{2}x_2(0)^2 \le \frac{3}{2}$

2. $\frac{3}{2} < x_1(0) + \frac{1}{2}x_2(0)^2 < 4$

3. $x_1(0) + \frac{1}{2}x_2(0)^2 \ge 4$

In case 1, if the initial trajectory segment intersects the line $x_1 + x_2 = 0$, it is not allowed to cross the singular line, as switching cannot occur for $x_1 + x_2 < 0$. Hence, the trajectory continues along the singular line until the origin is reached.

In case 2, switching is allowed, so that a trajectory can follow a parabola $x_1 = \frac{1}{2}x_2^2 + C$ (with $0 < C \le \frac{1}{2}$) until it reaches the singular line and then travels along the singular line.

In case 3, the trajectory behaves as in case 2, or switching occurs so that the trajectory follows the parabola $x_1 = \frac{1}{2}x_2^2$ until it reaches the state origin.
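As a quick numerical illustration of equations (2.36)-(2.39), the sketch below (a forward-Euler integration written for this discussion) starts on the admissible singular line $x_1 + x_2 = 0$ and applies the singular control $u = -x_2$; the state stays on the line and decays towards the origin:

```python
# Start on the singular line x1 + x2 = 0 with |x2| <= 1 and apply the
# singular control u = -x2 of equation (2.39).
x1, x2 = -0.5, 0.5
h = 1e-4                      # Euler step
for _ in range(200000):       # integrate to t = 20
    u = -x2                   # singular control
    x1, x2 = x1 + h * x2, x2 + h * u
# x1 + x2 is preserved (to rounding) by this scheme, and the state decays
print(abs(x1 + x2), abs(x1), abs(x2))
```

Note that each Euler step changes $x_1 + x_2$ by $h(x_2 + u) = 0$, so the scheme keeps the state on the singular line by construction.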
[Figure 2.1: the $(x_1, x_2)$ state plane showing the singular line $x_1 = -x_2$ and the parabolic trajectory segments $x_1 = -0.5x_2^2 + 4$, $x_1 = -0.5x_2^2 + 1.5$, $x_1 = 0.5x_2^2$ and $x_1 = 0.5x_2^2 + 0.5$, marking the three possible controls (1), (2) and (3).]

Figure 2.1: Demonstrating control solutions 1, 2 and 3
Chapter 3
Biological Problem

3.1 Notation

t = age
T = final time
w(t) = body weight, w(0) = initial weight
P(w) = total resources available for allocation, P ≥ 0
w_0 = energy used in producing one offspring, w_0 = w(0)/k, k a given constant
u(t) = control variable
b(u, w) = birth rate
μ(w) = mortality rate
l(t) = e^{-∫_0^t μ(x) dx} = survivorship from birth to age t
L(t) = e^{-rt} l(t), L(0) = 1

The combining of $e^{-rt}$ and $l(t)$ into one `state' variable is possible as the rate of increase of the population $r$ is mathematically equivalent to adding a constant term to the mortality rate $\mu$.
3.2 Biological Problem

The biological problem of resource allocation is considered as an optimal control problem, where fitness is maximised by the choice of the control variable. Fitness of a life cycle strategy is measured by its rate of increase $r$, defined by

$$1 = \int_0^T e^{-rt} l(t) b(t)\, dt, \quad (3.1)$$

where $T$ is the maximum age of reproduction, [4]. A strategy which maximises fitness is used as this is the most probable result of evolution.

The resources $P(w)$ available to a given organism of size $w$ are always subject to the competing demands of growth and reproduction. Since $L(t) = e^{-rt} l(t)$, it can be thought of as a single factor which weights reproduction in equation (3.1) and decreases with time at the rate $r + \mu(w)$. Reproduction decreases in value with time due to the population increasing at a rate of $r$ and the decrease in survival probability at a rate of $\mu(w)$. $u$ denotes the proportion of resources directed to reproduction; the remainder of the resources are directed to growth. Using the model of growth and reproduction developed in [6] we have an optimal control problem in which we choose $u$ to maximise $r$ subject to the following constraints:

$$\dot{w} = (1 - u(t))P(w), \qquad w(0) = \text{specified}, \quad (3.2)$$
$$\dot{L} = -(r + \mu(w))L, \qquad L(0) = 1, \quad (3.3)$$
$$\dot{\phi} = \frac{uP(w)L}{w_0}, \qquad \phi(0) = 0, \quad \phi(T) = 1, \quad (3.4)$$
$$\dot{r} = 0, \quad (3.5)$$

with $0 \le u \le 1$. The dot ($\dot{}$) denotes differentiation with respect to time. Equation (3.1) can now be rewritten as

$$\int_0^T \frac{uP(w)}{w_0} L\, dt = 1. \quad (3.6)$$

When maximising $r$ directly using Pontryagin's principle we need to define the Hamiltonian $H$, and from this we get the equations for the adjoint variables by differentiating with respect to the state variables. $H$ is defined as

$$H = \lambda_0 \frac{u(t)L(t)P(w)}{w_0} + \lambda_1(1 - u(t))P(w) - \lambda_2(r + \mu(w))L(t). \quad (3.7)$$

$\lambda_3$ does not appear in the Hamiltonian as $\dot{r} = 0$. The adjoint equations are given by

$$\dot\lambda_0 = 0, \quad (3.8)$$
$$\dot\lambda_1 = -\frac{\lambda_0 u L P'(w)}{w_0} - (1 - u(t))\lambda_1 P'(w) + \lambda_2 m'(w)L, \qquad \lambda_1(T) = 0, \quad (3.9)$$
$$\dot\lambda_2 = -\frac{\lambda_0 u P(w)}{w_0} + \lambda_2 m(w), \qquad \lambda_2(T) = 0, \quad (3.10)$$
$$\dot\lambda_3 = \lambda_2 L(t), \qquad \lambda_3(0) = 0, \quad \lambda_3(T) = 1. \quad (3.11)$$

We note that $m(w) = r + \mu(w)$ and $m'(w) = \mu'(w)$, where $(')$ denotes differentiation with respect to $w$.

3.3 Two phase solution

In a two phase solution the control $u$ switches instantly from zero to one. The switching point can be found by setting $\frac{\partial H}{\partial u} = 0$, which in this case gives

$$\frac{\partial H}{\partial u} = \frac{\lambda_0 L P}{w_0} - \lambda_1 P = 0. \quad (3.12)$$

The control $u$ is dependent on the relationship between $\frac{\lambda_0 L}{w_0}$ and $\lambda_1$, giving

$$u = 0 \quad \text{if } \frac{\lambda_0 L}{w_0} < \lambda_1, \quad (3.13)$$
$$u = 1 \quad \text{if } \frac{\lambda_0 L}{w_0} > \lambda_1. \quad (3.14)$$

(A boundary solution maximises the Hamiltonian.) This occurs due to the inherent properties of the problem, allowing all resources to be allocated to growth initially and all resources to reproduction at $t = T$. This gives the two phase strategy.

3.3.1 Finding a two phase solution

Taking a simple case where the functions $P(w)$ and $\mu(w)$ are both linear, the two phase exact solution can be found. Working with the general case where $P(w) = Aw$, $\mu(w) = Bw$ and $w(0) = 0.25$, the equations necessary for solving the linear case are derived.

Since the solution is two phase, there is only one switching point, which will be denoted with a subscript of 1. During the initial phase $u = 0$ and so from equation (3.2) we have $\frac{dw}{dt} = P(w)$, and hence we can find $t_1$ in terms of $w_1$:

$$t_1 = \int_{w(0)}^{w_1} \frac{1}{P(w)}\, dw. \quad (3.15)$$

Therefore

$$w_1 = 0.25\, e^{At_1}, \quad (3.16)$$

and from equation (3.3) $L$ can be found to be

$$L(t) = e^{-\int_{w(0)}^{w_1} \frac{r + \mu(w)}{P(w)}\, dw}.$$

Hence $L_1$ is given by

$$L_1 = e^{-\int_{w(0)}^{w_1} \frac{r + Bw}{Aw}\, dw}. \quad (3.17)$$

Now using equation (3.6) we get

$$1 = \int_{t_1}^{T} \frac{P(w_1)L(t)}{w_0}\, dt = \frac{P(w_1)L_1}{w_0} \int_{t_1}^{T} e^{-(r + \mu(w_1))(t - t_1)}\, dt = \frac{P(w_1)L_1}{w_0(r + \mu(w_1))}\left(1 - e^{-(r + \mu(w_1))(T - t_1)}\right). \quad (3.18)$$

This equation gives a relationship between $r$ and $w_1$. This can be shown graphically and the optimal strategy determined from this by observation. This reduces the optimal control problem to a simple static problem of finding $w_1$ to maximise $r$. A necessary condition for optimality is that

$$\frac{dr}{dw_1} = -\frac{\partial f/\partial w_1}{\partial f/\partial r} = 0. \quad (3.19)$$

Since we assume $\partial f/\partial r$ does not equal zero, we require $\partial f/\partial w_1$ to be equal to zero. Hence from equation (3.18) we obtain

$$\frac{m_1'}{m_1} P(w_1)(T - t_1)\,e^{-m_1(T - t_1)} + \left(\frac{P(w_1)}{m_1}\right)'\left(1 - e^{-m_1(T - t_1)}\right) - 1 = 0, \quad (3.20)$$

where $m_1 = r + \mu(w_1)$ and $m_1' = \mu'(w_1)$. We now have three equations (3.16, 3.18, 3.20) and three unknowns $r$, $w_1$ and $t_1$ from which the exact solution can be determined.

[Figure: the two phase control u(t), equal to 0 before the switching time and 1 after it.]

Figure 3.1: Two phase control

Example (linear)
The functions are given as

P(w) = 0.0702w
μ(w) = 0.01w
w(0) = 0.25
k = 0.0602
T = 100
w_0 = w(0)/k

which on substitution into equations (3.16), (3.18) and (3.20) give the following equations:

$$w_1 = 0.25\, e^{0.0702 t_1}, \quad (3.21)$$

$$0 = \frac{0.0702 w_1}{w_0(r + 0.01 w_1)}\, e^{-\int_{w(0)}^{w_1} \frac{r + 0.01w}{0.0702w}\, dw}\left(1 - e^{-(r + 0.01 w_1)(100 - t_1)}\right) - 1, \quad (3.22)$$

$$\frac{0.01}{r + 0.01 w_1}\, 0.0702 w_1 (100 - t_1)\, e^{-(r + 0.01 w_1)(100 - t_1)} + \left(\frac{0.0702 w_1}{r + 0.01 w_1}\right)'\left(1 - e^{-(r + 0.01 w_1)(100 - t_1)}\right) - 1 = 0. \quad (3.23)$$

One possible solution satisfying all the equations is given by $r = 0$, $w_1 = 2.29$ and $t_1 = 31.6$. This satisfies the necessary conditions for a stationary value of the functional.

3.4 Three phase solution

The three phase solution occurs when $\lambda_1 = \frac{\lambda_0 L}{w_0}$ for a non-zero length of time. It is this property which gives rise to the singular arc. The specification of the problem implies that $u$ must be zero initially and one finally, so the singular arc must occur in the middle of the life history. From [6] we have that the necessary conditions for a singular arc are dependent on whether mortality and growth are both increasing or decreasing with age.

1. Case 1 (mortality and production increasing):
   - $\mu'(w_2) > 0$
   - $P'(w_2) > r + \mu(w_2)$
   - $\left(\frac{P(w_2)}{r + \mu(w_2)}\right)' < 1$
   - the second order Clebsch-Legendre condition (2.24) must also hold.

2. Case 2 (mortality and production decreasing):
   - $\mu'(w_2) < 0$
   - $\left(\frac{P(w_2)}{r + \mu(w_2)}\right)' > 1$
   - the second order Clebsch-Legendre condition (2.24) must also hold.

Note: the dash $(')$ denotes differentiation with respect to $w_2$, which is the weight at the second switching point.
3.4.1 Finding a three phase solution

A general three phase solution is looked at, in the simple case where the functions $P(w)$ and $\mu(w)$ are any general linear functions. The first and second switching points are denoted by the subscripts 1 and 2 respectively. Equation (3.16) relating $w_1$ and $t_1$ remains valid, as does equation (3.17) relating $w_1$ and $L_1$. In the three phase optimal control problem, it is assumed that $\lambda_1 = \frac{\lambda_0 L}{w_0}$ for a non-zero period of time. During this phase $\frac{\partial H}{\partial u} = 0$ and the trajectory is a singular arc. Since $\lambda_1 = \frac{\lambda_0 L}{w_0}$ during the singular arc, we can differentiate to get

$$\dot\lambda_1 = \frac{\lambda_0 \dot{L}}{w_0}. \quad (3.24)$$

Substituting equation (3.9) into this equation and using equation (3.3), as well as the relationship $\lambda_1 = \frac{\lambda_0 L}{w_0}$, gives

$$\lambda_2 = \frac{\lambda_0\left(P'(w) - m(w)\right)}{w_0\, m'(w)}, \quad (3.25)$$

where $(')$ denotes differentiation with respect to $w$. $\lambda_2$ can now be found during the final phase from equation (3.10), giving

$$\lambda_2 = \frac{\lambda_0 P(w_2)}{w_0\, m(w_2)}\left(1 - e^{-m(w_2)(T - t_2)}\right), \quad (3.26)$$

which on substitution into equation (3.9) gives

$$\frac{\lambda_1 w_0}{\lambda_0} = \frac{m_2'}{m_2}P(w_2)L(T)(T - t) + \left(\frac{P(w_2)}{m_2}\right)'\left[L(t) - L(T)\right]. \quad (3.27)$$

At the start of the final phase $\frac{\lambda_1 w_0}{\lambda_0} = L(T)e^{m(w_2)(T - t_2)}$, which on substituting into equation (3.27) gives the criterion equation

$$\frac{m_2'}{m_2}P(w_2)(T - t_2)\,e^{-m_2(T - t_2)} + \left(\frac{P(w_2)}{m_2}\right)'\left(1 - e^{-m_2(T - t_2)}\right) - 1 = 0, \quad (3.28)$$

where $m_2 = r + \mu(w_2)$. The next step is to equate equations (3.25) and (3.26), the trajectories of $\lambda_2$ during the singular arc and the final phase respectively, at $t = t_2$. This gives a new equation between $r$ and $t_2$:

$$T - t_2 = -\frac{1}{m(w_2)}\ln\left(1 - \frac{m(w_2)\left(P'(w_2) - m(w_2)\right)}{P(w_2)\,m'(w_2)}\right). \quad (3.29)$$

Before the singular arc analogue of equation (3.18) can be defined, it is necessary to define the optimal control $u^*$ during the singular arc. This is defined as

$$u^* = \frac{\frac{m(w)\left(P'(w) - m(w)\right)}{m'(w)} - P(w)\left(\frac{P'(w) - m(w)}{m'(w)}\right)'}{P(w)\left(1 - \left(\frac{P'(w) - m(w)}{m'(w)}\right)'\right)}, \quad (3.30)$$

which on substitution into equation (3.2) gives

$$\dot{w} = \frac{P(w) - \frac{m(w)\left(P'(w) - m(w)\right)}{m'(w)}}{1 - \left(\frac{P'(w) - m(w)}{m'(w)}\right)'} = G(w, r). \quad (3.31)$$

Hence

$$t_2 - t_1 = \int_{w_1}^{w_2} \frac{1}{G(w, r)}\, dw. \quad (3.32)$$

Using equation (3.30) to specify $u$ during the singular arc, $w_2$ can be found, and having found $w_2$ an expression specifying $L_2$ can be found. From these it is possible to construct the singular arc analogue of equation (3.18), given by

$$1 = \int_{w_1}^{w_2} \frac{P(w) - G(w, r)}{w_0\,G(w, r)}\, e^{-\int_{w(0)}^{w_1}\frac{r + \mu(s)}{P(s)}\,ds\; -\; \int_{w_1}^{w}\frac{r + \mu(s)}{G(s, r)}\,ds}\, dw + \frac{P(w_2)}{w_0(r + \mu(w_2))}\, e^{-\int_{w(0)}^{w_1}\frac{r + \mu(s)}{P(s)}\,ds\; -\; \int_{w_1}^{w_2}\frac{r + \mu(s)}{G(s, r)}\,ds}\left(1 - e^{-(r + \mu(w_2))(T - t_2)}\right). \quad (3.33)$$

We now have five equations (3.16), (3.28), (3.29), (3.32) and (3.33) and five unknowns $r$, $w_1$, $w_2$, $t_1$ and $t_2$ from which the exact solution can be determined.
[Figure: the three phase control u(t): 0 during the initial growth phase, intermediate values along the singular arc, and 1 during the final reproductive phase.]

Figure 3.2: Three phase control


Example (linear)
The functions are given as

P(w) = 0.0702w
μ(w) = 0.01w
w(0) = 0.25
k = 0.0602
T = 100
w_0 = w(0)/k

which on substitution into equations (3.16), (3.28), (3.29), (3.32) and (3.33) give the following equations:

$$w_1 = 0.25\, e^{0.0702 t_1}, \quad (3.34)$$

$$\frac{0.01}{r + 0.01 w_2}\, 0.0702 w_2 (100 - t_2)\, e^{-(r + 0.01 w_2)(100 - t_2)} + \left(\frac{0.0702 w_2}{r + 0.01 w_2}\right)'\left(1 - e^{-(r + 0.01 w_2)(100 - t_2)}\right) - 1 = 0, \quad (3.35)$$

$$100 - t_2 = \frac{1}{0.01 w_2}\ln\left(\frac{0.0702}{0.01 w_2}\right), \quad (3.36)$$

$$t_2 - t_1 = \int_{w_1}^{w_2} \frac{1}{G(w, r)}\, dw, \qquad \text{where } G(w, 0) = 0.005 w^2, \quad (3.37)$$

$$1 = \int_{w_1}^{w_2} \frac{0.0702w - G(w, r)}{w_0\,G(w, r)}\, e^{-\int_{w(0)}^{w_1}\frac{r + 0.01s}{0.0702s}\,ds\; -\; \int_{w_1}^{w}\frac{r + 0.01s}{G(s, r)}\,ds}\, dw + \frac{0.0702 w_2}{w_0(r + 0.01 w_2)}\, e^{-\int_{w(0)}^{w_1}\frac{r + 0.01s}{0.0702s}\,ds\; -\; \int_{w_1}^{w_2}\frac{r + 0.01s}{G(s, r)}\,ds}\left(1 - e^{-(r + 0.01 w_2)(100 - t_2)}\right). \quad (3.38)$$

One possible solution is given by $r = 0$, $w_1 = 1.81$, $w_2 = 2.58$, $t_1 = 28.20$ and $t_2 = 61.28$. This solution satisfies the necessary conditions for a stationary value of the functional.
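The reported switching values can be cross-checked numerically. The sketch below is a consistency check written for this discussion; it assumes two reductions specific to this linear example with $r = 0$: the switching criterion reduces to $m_2(T - t_2) = 1$, and the singular-arc growth rate is $G(w, 0) = 0.005w^2$. The computed values should match the figures quoted above:

```python
import math

A, B, T = 0.0702, 0.01, 100.0

# At r = 0 the second switch satisfies 0.01*w2*(T - t2) = ln(A/(B*w2))
# together with the assumed criterion 0.01*w2*(T - t2) = 1, so 0.01*w2 = A/e.
w2 = A / (B * math.e)
t2 = T - 1.0 / (B * w2)

# Equation (3.34): w1 = 0.25*exp(A*t1) with the reported t1 = 28.20
t1 = 28.20
w1 = 0.25 * math.exp(A * t1)

# Equation (3.37): t2 - t1 = integral of dw/(0.005 w^2) from w1 to w2
arc_time = (1.0 / w1 - 1.0 / w2) / 0.005

print(round(w2, 2), round(t2, 2), round(w1, 2), round(arc_time, 1))
```

The arc time agrees with $t_2 - t_1$ to within the rounding of the reported switching times.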

Chapter 4
Development of Numerical Method

The development of the numerical method begins by deriving numerical schemes for solving the state equations for an arbitrary control vector u. The adjoint equations are then solved for this arbitrary control vector u using similar numerical methods. Since the equations for the adjoint variables are dependent on the state equations, they are solved with the same step size to avoid having to use interpolation. The problem solution obtained at this point will not be an optimal solution. The projected gradient and conditional gradient methods are used to find the optimal solution.

4.1 Solving the state equations

The numerical scheme begins with finding solutions to the state equations for a given control $u$. The equations for the state variables are solved first, as the adjoint equations are dependent upon them; these equations are solved from $t = 0$ to a final time $t = T$.

We commence with the state equation for $w$, equation (3.2), which is independent. The remaining state equations are solved simultaneously. The function $P(w)$ given in equation (3.2) is assumed to be problem specific and user defined. A simple trapezium rule discretisation is applied to equation (3.2) for $w$, giving

$$w_{j+1} = w_j + \frac{h}{2}\left((1 - u_j)P(w_j) + (1 - u_{j+1})P(w_{j+1})\right), \quad (4.1)$$

where $h$ is the step length ($h = T/n$, where $T$ is the final time and $n$ is the number of steps), and $j$ denotes the number of steps taken from $t = 0$.

It can be seen that $P(w_{j+1})$ cannot be evaluated as $w_{j+1}$ is unknown, and so this form of simple discretisation cannot be used directly. Instead it is used to form an iterative method, such that

$$w_{j+1}^{n+1} = w_j + \frac{h}{2}(1 - u_j)P(w_j) + \frac{h}{2}(1 - u_{j+1})P(w_{j+1}^n), \quad (4.2)$$

where $w_0$ is known and $w_{j+1}^1 = w_j$ at each step. The iteration process is stopped when $|w_{j+1}^{n+1} - w_{j+1}^n| \le tol$ at each step. Having developed a numerical scheme to find $w$, we must now show that the method converges. It is assumed that $P$ satisfies the Lipschitz condition

$$|P(w) - P(y)| \le A|w - y|. \quad (4.3)$$

Then for convergence it is necessary that

$$|w_{j+1} - w_{j+1}^{n+1}| \le K|w_{j+1} - w_{j+1}^n|, \qquad K < 1. \quad (4.4)$$

Using equations (4.2) and (4.4), we obtain

$$|w_{j+1} - w_{j+1}^{n+1}| = \frac{h}{2}|1 - u_{j+1}|\,|P(w_{j+1}) - P(w_{j+1}^n)|, \quad (4.5)$$

which from equation (4.3) gives

$$|w_{j+1} - w_{j+1}^{n+1}| \le \frac{h}{2}|1 - u_{j+1}|\,A\,|w_{j+1} - w_{j+1}^n|. \quad (4.6)$$

Hence for convergence

$$\frac{h}{2}|1 - u_{j+1}|\,A < 1, \quad (4.7)$$

which gives a bound on the size of $h$ as $h \le \frac{2}{A}$, where $A$ is the Lipschitz constant.

The Lipschitz constant $A$ depends on the function $P(w)$. If $P(w)$ is a linear function such that $P(w) = \alpha w$, then

$$|P(w) - P(y)| = |\alpha w - \alpha y| = |\alpha|\,|w - y|, \quad (4.8)$$

which gives the Lipschitz constant $A$ as $|\alpha|$. If $P(w)$ is a non-linear function, then the mean value theorem is used to determine $A$:

$$|P(w) - P(y)| \le |P'(\xi)(w - y)| \le \max|P'(\xi)|\,|w - y|, \quad (4.9)$$

giving a bound on the Lipschitz constant $A$ as $A \le \max|P'(\xi)|$.

We now continue by developing a numerical scheme to solve the remaining state equations, equations (3.3), (3.4) and (3.5), simultaneously. The equations are again discretised using the trapezium rule, such that

$$L_{j+1} = \frac{1 - \frac{h}{2}(r + \mu(w_j))}{1 + \frac{h}{2}(r + \mu(w_{j+1}))}\,L_j, \qquad L_0 = 1, \quad (4.10)$$

$$\phi_{j+1} = \phi_j + \frac{h}{2w_0}\left(u_j P(w_j)L_j + u_{j+1}P(w_{j+1})L_{j+1}\right), \qquad \phi_0 = 0. \quad (4.11)$$

Since $r$ is an unknown constant, equations (4.10) and (4.11) cannot be solved without a value for $r$ being specified. This is done by starting a sequence of iterations for the value of $r$ with two initial guesses and evaluating $L$ and $\phi$ respectively for each guess. If the end condition $\phi(T) = 1$ is not satisfied, a secant method is used to update the value of $r$, which is then used to recalculate the values of $L$ and $\phi$ until the end condition on $\phi$ is satisfied. $r$ is updated by

$$r^{n+1} = r^n - \frac{g(r^n)(r^n - r^{n-1})}{g(r^n) - g(r^{n-1})}, \quad (4.12)$$

stopping only when $|r^{n+1} - r^n|$ is small enough. The function $g(r^n)$ is defined to force the end condition $\phi(T) = 1$; hence

$$g(r^n) = \phi(T) - 1. \quad (4.13)$$

The solutions found for $w$, $L$, $\phi$ and $r$ are now used in determining the numerical solutions of the adjoint equations. Due to the interdependence of the equations, the equations for $\lambda_0$, $\lambda_2$ and $\lambda_3$ are solved simultaneously.
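As an illustration of the scheme of this section, the sketch below (written for this discussion, assuming the linear example P(w) = 0.0702w, μ(w) = 0.01w of chapter 3 and a fixed two phase trial control switching at t = 31.6) integrates w, L and φ by the trapezium rule and applies the secant update (4.12)-(4.13) to find r. Since the trial control is close to the analytic optimum, the converged r is close to zero:

```python
import math

# Linear example constants and a fixed two phase trial control
A, B = 0.0702, 0.01
W_INIT, K, T, N = 0.25, 0.0602, 100.0, 4000
W_OFF, H = W_INIT / K, T / N
T1 = 31.6
u = [0.0 if j * H < T1 else 1.0 for j in range(N + 1)]

def phi_T(r):
    """Integrate w, L, phi from t = 0 to T and return phi(T)."""
    w, L, phi = W_INIT, 1.0, 0.0
    for j in range(N):
        wn = w                          # fixed-point iteration of (4.2)
        for _ in range(50):
            wn_next = w + 0.5 * H * ((1 - u[j]) * A * w
                                     + (1 - u[j + 1]) * A * wn)
            done = abs(wn_next - wn) < 1e-12
            wn = wn_next
            if done:
                break
        L_next = (1 - 0.5 * H * (r + B * w)) / (1 + 0.5 * H * (r + B * wn)) * L
        phi += 0.5 * H * (u[j] * A * w * L + u[j + 1] * A * wn * L_next) / W_OFF
        w, L = wn, L_next
    return phi

# Secant iteration (4.12) with g(r) = phi(T) - 1  (4.13)
r0, r1 = -0.01, 0.01
g0, g1 = phi_T(r0) - 1.0, phi_T(r1) - 1.0
for _ in range(50):
    if g1 == g0 or abs(r1 - r0) < 1e-12:
        break
    r0, r1, g0 = r1, r1 - g1 * (r1 - r0) / (g1 - g0), g1
    g1 = phi_T(r1) - 1.0
print(round(r1, 3))   # close to zero for this near-optimal trial control
```

Note that the implicit trapezium step for w needs only two or three fixed-point iterations here, since h is well inside the bound h ≤ 2/A.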

4.2 Solving the adjoint equations

The adjoint equations are equations (3.8), (3.9), (3.10) and (3.11). To begin solving the adjoint equations for $\lambda_0$, $\lambda_2$ and $\lambda_3$, the end conditions are used.

The numerical solution is found using the same technique as that used for finding $r$, $L$ and $\phi$, and so only the outline is given here. Two initial estimates of the constant $\lambda_0$ are obtained, and equation (3.10) for $\lambda_2$ and equation (3.11) for $\lambda_3$ are solved for these values, with only the conditions at $t = T$ being used. The end condition on $\lambda_3$ for $t = 0$ is used in the secant update to give

$$\lambda_0^{n+1} = \lambda_0^n - \frac{\lambda_3^n(0)\left(\lambda_0^n - \lambda_0^{n-1}\right)}{\lambda_3^n(0) - \lambda_3^{n-1}(0)}. \quad (4.14)$$

Equations (3.10) and (3.11) for $\lambda_2$ and $\lambda_3$ are again solved by using a trapezium discretisation, working backwards in time from the final time $t = T$, so that

$$\lambda_{2,j} = \frac{\left(1 - \frac{h}{2}(r + \mu(w_{j+1}))\right)\lambda_{2,j+1} + \frac{h\lambda_0}{2w_0}\left(u_j P(w_j) + u_{j+1}P(w_{j+1})\right)}{1 + \frac{h}{2}(r + \mu(w_j))}, \quad (4.15)$$

$$\lambda_{3,j} = \lambda_{3,j+1} - \frac{h}{2}\left(\lambda_{2,j}L_j + \lambda_{2,j+1}L_{j+1}\right). \quad (4.16)$$

It is now possible to solve equation (3.9) for $\lambda_1$; this is again done with a trapezium discretisation, stepping back in time from $t = T$ to $t = 0$. This gives

$$\lambda_{1,j} = \frac{\left(1 + \frac{h}{2}(1 - u_{j+1})P'(w_{j+1})\right)\lambda_{1,j+1}}{1 - \frac{h}{2}(1 - u_j)P'(w_j)} - \frac{h\left(\lambda_{2,j}\mu'(w_j)L_j + \lambda_{2,j+1}\mu'(w_{j+1})L_{j+1}\right)}{2\left(1 - \frac{h}{2}(1 - u_j)P'(w_j)\right)} + \frac{h\lambda_0\left(u_j L_j P'(w_j) + u_{j+1}L_{j+1}P'(w_{j+1})\right)}{2w_0\left(1 - \frac{h}{2}(1 - u_j)P'(w_j)\right)}, \quad (4.17)$$

from which $\lambda_1$ can be found.
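The backward sweep of equations (4.15) and (4.16) can be sketched as follows. The sketch is illustrative only: it assumes the linear example functions with r = 0, a constant control u = 1 (so the weight stays at w(0) and λ2 has a simple closed form to compare against) and λ0 = 1:

```python
import math

# Backward trapezium sweep for lambda2 (eq 4.15) and lambda3 (eq 4.16),
# assuming P(w) = 0.0702w, mu(w) = 0.01w, r = 0, u = 1 and lam0 = 1.
A, B, W_OFF = 0.0702, 0.01, 0.25 / 0.0602
N, T = 1000, 100.0
h = T / N
r, lam0 = 0.0, 1.0
w = [0.25] * (N + 1)                    # u = 1: no growth
L = [math.exp(-(r + B * 0.25) * j * h) for j in range(N + 1)]
u = [1.0] * (N + 1)

lam2 = [0.0] * (N + 1)                  # lam2(T) = 0
lam3 = [0.0] * (N + 1)
lam3[N] = 1.0                           # lam3(T) = 1
for j in range(N - 1, -1, -1):
    num = ((1 - 0.5 * h * (r + B * w[j + 1])) * lam2[j + 1]
           + 0.5 * h * lam0 / W_OFF * (u[j] * A * w[j]
                                       + u[j + 1] * A * w[j + 1]))
    lam2[j] = num / (1 + 0.5 * h * (r + B * w[j]))
    lam3[j] = lam3[j + 1] - 0.5 * h * (lam2[j] * L[j] + lam2[j + 1] * L[j + 1])
```

With w constant, equation (3.10) has the closed form λ2(t) = λ0 P (1 − e^{−m(T−t)}) / (w_0 m), which the sweep reproduces to the trapezium rule's accuracy.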

4.3 Ways of finding optimal solutions

A numerical method for finding the optimal control u, and hence the optimal solutions to equations (3.2-3.5) and (3.8-3.11), is needed, assuming such an optimum exists. To optimise the value of the control vector u, starting from an arbitrary control vector, two methods have been tried: the projected gradient method and the conditional gradient method. The basic algorithms were taken from [1] and adapted as necessary to suit this problem.
4.3.1 Projected gradient method

A basic outline of this method is shown in the flowchart in Fig 4.1. The new
approximation to the optimal control u is chosen such that

    u_new = u_old + step × ∂H/∂u_old,                                       (4.18)

where step is the step size and ∂H/∂u_old is specified by equation (3.12). If
u_new is greater than one then it is set to the maximum value of one; similarly,
if u_new is less than zero then it is set to zero. If the value of r (which we
wish to maximise) has not increased, then the step size is halved and the process
of finding a new control u is repeated until r has increased in value. The
adjoint variables are then evaluated for the new control, as is ∂H/∂u_new, which
is used to determine if the solution has converged to the optimal control u. A
new variable ũ is evaluated such that ũ equals zero if ∂H/∂u_new is less than
zero and one otherwise, where ∂H/∂u_new is evaluated using equation (3.12).
Convergence is then measured by the inner product

    ⟨ ∂H/∂u , ũ − u ⟩,                                                      (4.19)

which is determined numerically as

    Σ_{k=1}^{n} h (∂H/∂u)|_{u_k} (ũ_k − u_k).                               (4.20)

This choice of ũ maximises the first variation of the functional over all
possible choices of u. If the inner product is sufficiently small then the
process is said to have converged and the optimal control u has been determined;
otherwise the step size is set to one again and the whole process repeated until
the optimal value of u is determined.
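One outer iteration of this scheme can be sketched as follows. This is an illustrative fragment, not the dissertation's code: `r_of`, which stands for solving the state equations and returning the functional r for a given control, is a hypothetical interface.

```python
import numpy as np

def projected_gradient_step(u_old, dH_du, r_of, step=1.0, min_step=1e-8):
    """One outer iteration of the projected gradient method: move u in the
    gradient direction (4.18), project onto [0, 1], and halve the step
    until the functional r has increased."""
    r_old = r_of(u_old)
    while step > min_step:
        u_new = np.clip(u_old + step * dH_du, 0.0, 1.0)  # (4.18) + projection
        if r_of(u_new) > r_old:                          # accept only if r grew
            return u_new, step
        step *= 0.5                                      # otherwise halve step
    return u_old, step                                   # no improving step found

def converged(dH_du, u, h, tol=0.005):
    """Convergence test (4.19)-(4.20): u_tilde is the bang-bang control
    maximising the first variation, and the discrete inner product
    h * sum dH/du * (u_tilde - u) must fall below the tolerance."""
    u_tilde = (dH_du > 0).astype(float)
    return h * np.sum(dH_du * (u_tilde - u)) < tol
```

The projection via `np.clip` implements the bound 0 ≤ u ≤ 1 exactly as described in the text: values above one are set to one and values below zero to zero.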

4.3.2 Conditional gradient method

Again the basic outline of the method is shown using a flowchart, see Fig 4.2.
The idea is to generate a sequence of possible control vectors u for which the
value of the functional r (which we wish to maximise) is non-decreasing. u is an
approximation to the optimal control, with solutions r, w, L and λ to the state
equations and solutions λ_0, λ_1, λ_2 and λ_3 to the adjoint equations. The new
approximation to u is then made as

    u_new = (1 − s) u_old + s ũ,                                            (4.21)

where s is the step size and ũ is obtained as follows:

    ũ = 1  if ∂H/∂u_new > 0,
    ũ = 0  otherwise,

where ∂H/∂u_new is the functional gradient specified by equation (3.12). This
selection maximises the first variation ⟨ ∂H/∂u_new , ũ − u_new ⟩ of the
functional over all possible choices of ũ and is used to determine when the
optimum has been found. This is the same convergence criterion as used for the
projected gradient method and is given by equation (4.20).
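One outer iteration of the conditional gradient update (4.21) can be sketched as follows. As before this is illustrative only, with `r_of` a hypothetical stand-in for solving the state equations.

```python
import numpy as np

def conditional_gradient_step(u_p, dH_du, r_of, min_step=1e-8):
    """One outer iteration of the conditional gradient method: the extreme
    admissible control u_tilde (bang-bang, 1 where dH/du > 0) maximises the
    first variation, and the new control is the convex combination (4.21),
    with the step halved until the functional r increases."""
    u_tilde = (dH_du > 0).astype(float)      # extreme point of [0, 1]^n
    r_old = r_of(u_p)
    step = 1.0
    while step > min_step:
        u = (1.0 - step) * u_p + step * u_tilde   # (4.21)
        if r_of(u) > r_old:
            return u, step
        step *= 0.5
    return u_p, step
```

Because (4.21) is a convex combination of two admissible controls, the new control stays in [0, 1] automatically, so no projection step is needed; this is the main structural difference from the projected gradient method.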

 
Start: set u_old = u_new, r_old = r_new, ∂H/∂u_old = ∂H/∂u_new.
1. Set step = 1.
2. Calculate u_new = u_old + step ∂H/∂u_old.
3. If u_new < 0.0, set u_new = 0.0; if u_new > 1.0, set u_new = 1.0.
4. Solve the state equations. If r_new is not greater than r_old, set
   step = step/2 and return to 2.
5. Solve the adjoint equations and calculate ∂H/∂u_new.
6. Set ũ = 1 if ∂H/∂u_new > 0, and ũ = 0 otherwise.
7. Calculate c = ⟨ ∂H/∂u_new , ũ − u_new ⟩.
8. If c ≤ tol, set u = u_new and end; otherwise return to Start.

Figure 4.1: Projected Gradient Algorithm
 
Start: set u = u_n = u_p, r_old = r_new.
1. Set step = 1, u_p = u, r_old = r_new.
2. Calculate u = (1 − step) u_p + step u_n.
3. Solve the state equations. If r_new is not greater than r_old, set
   step = step/2 and return to 2.
4. Solve the adjoint equations and calculate ∂H/∂u.
5. Set u_n = 1 if ∂H/∂u > 0, and u_n = 0 otherwise.
6. Calculate c = ⟨ ∂H/∂u , u_n − u_p ⟩.
7. If c ≤ tol, the optimal control is u; end. Otherwise return to 1.

Figure 4.2: Conditional Gradient Algorithm
Chapter 5
Numerical Results
In this chapter the results from a series of test problems are presented. Due to
the difficulty of calculating analytical solutions, even in the case of linear
functions for P(w) and μ(w), the analytical solutions have only been fully
calculated for a linear test problem. Tests are made which look at the
sensitivity of the method to the initial approximation of r, the effect on the
solution of the initial arbitrary control u, and the effect of changing the
length of the step taken in the gradient direction on the rate of convergence
and accuracy of the solution for both optimisation methods. The efficiency of
the two optimisation methods is compared by looking at the total number of inner
iterations (the number of times r is calculated) and the number of outer
iterations (the number of times the new control is recalculated if the
convergence criterion is not satisfied).

5.1 The Effects of Initial Data

Consideration is given firstly to what initial data the method requires and then
to its possible effect on the method's numerical solutions. The initial data
required are

i)   w(0), the weight w at t = 0;
ii)  T, the final time;
iii) N, the number of steps (h = T/N);
iv)  two initial approximations to the value of r;
v)   two initial approximations to the value of λ_0;
vi)  the switching point (zero to one) for the arbitrary initial control;

none of which depends on the optimisation method used. Points iv and vi are of
primary interest.

5.2 Testing Optimality

The numerical method developed solves eight first order differential equations
which are related in such a way that seven of the eight differential equations
are affected either directly or indirectly by the value of r. This makes the
method sensitive to the value of r, to the extent that if the initial
approximation is not close enough to r the method provides a false solution.
This can be detected by using the condition λ_0 = λ_2(0), obtained by equating
λ_0 times equation (3.6) with L times equation (3.10), rearranged and
integrated, giving

    λ_0 = ∫_0^T λ_0 u P(w) L / w dt = ∫_0^T ( λ_2 m L − λ̇_2 L ) dt.        (5.1)

Integrating by parts gives

    λ_0 = ∫_0^T ( λ_2 m L + λ_2 L̇ ) dt − [ λ_2 L ]_0^T.                    (5.2)

Using equation (3.3) gives

    λ_0 = ∫_0^T ( λ_2 m L − λ_2 m L ) dt + λ_2(0).                          (5.3)

Hence λ_0 = λ_2(0). This condition is only satisfied when the solution is an
optimal solution. Since r is calculated using the secant method, the initial
approximations must be reasonably close to r. This will hopefully not present a
problem when dealing with real data.
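The secant iteration used for r can be sketched generically as follows. This is a standard root-finding routine, not the dissertation's code; `f` stands for the residual that is driven to zero when the characteristic (state) equation is satisfied.

```python
def secant(f, r0, r1, tol=1e-6, max_iter=50):
    """Secant iteration for a root of f: as noted above, the two starting
    guesses r0, r1 must be reasonably close to the root for convergence."""
    f0, f1 = f(r0), f(r1)
    for _ in range(max_iter):
        if abs(f1 - f0) < 1e-15:          # flat secant: cannot proceed
            break
        r2 = r1 - f1 * (r1 - r0) / (f1 - f0)   # secant update
        if abs(r2 - r1) < tol:
            return r2
        r0, f0 = r1, f1
        r1, f1 = r2, f(r2)
    return r1
```

The tolerance of 10^-6 matches that quoted for the secant iterations in section 5.3, and the λ_0 = λ_2(0) check above can then be applied to the converged solution to detect a false root.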

5.3 Convergence

In this section we look at the convergence criteria used throughout the
numerical method and the effect of the number of steps N. Considering first the
state equations, the convergence of w is pointwise: when two successive
approximations to w_j are within the given tolerance, the solution at the point
w_j is accepted. The tolerance used is 10^-6. The convergence criteria for the
remaining state equations r, L and λ depend on the convergence of the secant
iteration for r, which again uses a tolerance of 10^-6. The adjoint equations
for λ_0, λ_2 and λ_3 depend on the convergence of the secant iteration for λ_0,
which again has a tolerance of 10^-6. The solution of λ_1, a backward
time-stepping scheme using a trapezium rule discretisation, converges with
order O(h^2).

The control u calculated from both optimisation methods uses a convergence
criterion dependent on the inner product specified by equation (4.19). The
magnitude of this inner product dictates the convergence of the control u. The
method requires the inner product to be less than 0.005 for convergence.
Table 5.1 shows the variation in w_1 at the first switching point and the
variation in w_2 at the second switching point as the number of steps N is
increased. The solutions can be seen to converge to two decimal places when
N = 2000.

      N      w_1      w_2     t_1     t_2
     500    2.0395   2.0413   30.0   74.6000
    1000    2.0466   2.0484   30.0   74.4000
    1500    2.0490   2.0508   30.0   74.2667
    2000    2.0502   2.0520   30.0   74.2500
    2500    2.0509   2.0527   30.0   74.2800
    3000    2.0514   2.0532   30.0   74.2667

Table 5.1: The effect of N on the convergence of the projected gradient method

5.4 Moving the Initial Switching Point

The switching point t_s is the time at which the arbitrary initial control u
switches from zero to one. Hence

    u = 0  if t < t_s,
    u = 1  if t > t_s.                                                      (5.4)
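On the uniform grid t_j = jh this initial bang-bang control can be constructed directly; a minimal sketch:

```python
import numpy as np

def initial_control(N, T, t_s):
    """Arbitrary initial control (5.4): u = 0 for t < t_s and u = 1 for
    t > t_s, sampled on the uniform grid t_j = j*T/N, j = 0..N."""
    t = np.linspace(0.0, T, N + 1)
    return (t > t_s).astype(float)
```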

The effect of the position of t_s on the solutions found using both the
projected and conditional gradient methods is considered by looking at the
changes in r and λ_0, as well as the number of iterations necessary for the
methods to converge. The problem used for this evaluation is the linear test
problem specified by the functions P(w) and μ(w), the initial value w(0), a
constant k and the final time t = T, given by

    P(w) = 0.0702 w,
    μ(w) = 0.01 w,
    w(0) = 0.25,
    k = 0.0602,
    T = 100.

The analytical solution has already been calculated (chapter 3) for this
problem, with both two and three phase solutions being found.
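For experimentation, the test problem data can be written down directly. The pure-growth relation dw/dt = P(w) used below is an assumption about the form of state equation (3.2) during the first phase (u = 0), though it does reproduce the switching weight w_1 ≈ 2.05 reported in table 5.1.

```python
import numpy as np

# Linear test problem data (chapter 3)
P  = lambda w: 0.0702 * w     # production rate P(w)
mu = lambda w: 0.01 * w       # mortality rate mu(w)
w0, k, T = 0.25, 0.0602, 100.0

# Assuming dw/dt = P(w) during pure growth (u = 0), the weight grows
# exponentially, so at the first switching point t_1 = 30 we expect
w_at_t1 = w0 * np.exp(0.0702 * 30.0)   # roughly 2.05, cf. table 5.1
```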
Projected Gradient Method

    u switch point   no. of iterations      r      λ_0 = λ_2(0)?    λ_0
    t =  5                 266          -0.0009        yes         0.0163
    t = 10                 263          -0.0007        yes         0.0166
    t = 15                 254          -0.0005        yes         0.0169
    t = 20                 224          -0.0004        yes         0.0172
    t = 25                 145          -0.0002        yes         0.0175
    t = 30                   1           0.0           yes         0.0176
    t = 35                  19          -0.0002        yes         0.0173
    t = 40                 104          -0.0003        yes         0.0170

Table 5.2: The effect of moving the switching point for the projected gradient
method, N = 2000

The results in table 5.2 show that moving the switching point t_s has a quite
dramatic effect on the number of iterations required for the method to converge.
In all cases where a three phase solution is optimal, the projected gradient
method will find this solution, and has done so for this example. From chapter
three we know that the first switching point is at t = 28.20 for this problem,
and so the rapid convergence of the method when t_s = 30 is only to be expected.
The changes in the values of r and λ_0 are less marked and indicate the method
converging to a stable solution at r = 0. The solutions are identical to two
decimal places, which is all that can be expected when N = 2000. The variation
in the results is affected by the tolerance used for calculating the control
variable, although making this stricter to reduce the variation in the solution
is not viable due to the resulting large increase in the number of iterations.
Conditional Gradient Method

    u switch point   no. of iterations      r      λ_0 = λ_2(0)?    λ_0
    t = 30                   1          -0.0001        yes         0.0176
    t = 32.5                 1           0.0000        yes         0.0174

Table 5.3: The effect of moving the switching point for the conditional gradient
method, N = 2000

The conditional gradient method had considerable difficulty in solving this
problem, with solutions only being achieved for the two switching times in table
5.3. The solutions generated were two-phase `bang-bang' type solutions. From
the analytical solution in chapter three we know that the actual switching point
for the two phase solution is t_s = 31.60, so the method's rapid convergence at
the above switching times, which are very close to the exact solution, is not
surprising. It is, however, disappointing that the method fails to produce a
solution if the switching point is moved elsewhere. The reason for this needs
further investigation to determine its cause, which would appear to be either
the method's inability to cope with problems that have a possible three phase
solution containing a singular arc, which would render the method almost
useless, or the magnitude of T causing some type of error propagation.

5.5 Changing the Step Length in the Gradient Direction

The step length in the gradient direction directly concerns the optimisation
methods. The aim is to try and improve the rate of convergence without losing
accuracy. Only the projected gradient method was considered in detail. When
using the original step length of one, the method appeared to be unusable,
requiring over eight hundred iterations to converge to a solution when the
switching point t_s was not very close to the exact solution. This caused
errors to affect the solution, resulting in a sub-optimal solution being found.
Increases in the step length reduced the number of iterations required for
convergence, and the results are shown in table 5.4 for a variety of switching
times, as it would be most unusual for the initial guess to be almost identical
to the exact solution.

It appears from the results in table 5.4 that very large increases in step
length are possible without the solution being affected, while reducing the
number of iterations required for convergence. However, caution must be
advised, as the step length possible is dependent on the size of the functional
gradient, which for this example is very small. A step length of four would,
however, be a good starting point for most problems. The development of a
scaling method to determine the length of the step based on the magnitude of
the functional gradient would be a useful addition to the numerical method and
is an area of future work.
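One possible such scaling normalises the initial step by the largest component of the functional gradient. This is purely illustrative of the idea suggested above, not part of the method as implemented.

```python
import numpy as np

def scaled_step(dH_du, target=1.0):
    """Choose the initial step so the largest control update has magnitude
    `target`, making the effective step independent of the size of the
    functional gradient (a hypothetical scaling, not the dissertation's)."""
    g = float(np.max(np.abs(dH_du)))
    return target / g if g > 0.0 else 1.0
```

With the very small gradients of the linear test problem this would automatically produce the large step lengths found useful in table 5.4, without hand-tuning.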

    u switch point   no. of iterations   step length      r      λ_0 = λ_2(0)?    λ_0
    t = 10                 263                4        -0.0007        yes         0.0166
    t = 20                 224                4        -0.0004        yes         0.0172
    t = 30                   1                4         0.0           yes         0.0176
    t = 40                 104                4        -0.0003        yes         0.0170
    t = 10                 131                8        -0.0007        yes         0.0166
    t = 20                 112                8        -0.0004        yes         0.0172
    t = 30                   1                8         0.0           yes         0.0176
    t = 40                  51                8        -0.0003        yes         0.0170
    t = 10                  65               16        -0.0007        yes         0.0166
    t = 20                  55               16        -0.0004        yes         0.0172
    t = 30                   1               16         0.0           yes         0.0176
    t = 40                  25               16        -0.0003        yes         0.0170
    t = 10                  32               32        -0.0007        yes         0.0166
    t = 20                  27               32        -0.0004        yes         0.0172
    t = 30                   1               32         0.0           yes         0.0176
    t = 40                  12               32        -0.0003        yes         0.0170

Table 5.4: Projected gradient method with variable initial u and step length in
the gradient direction

Chapter 6
Conclusions

This project set out to develop a numerical method capable of solving the
problem of finding optimal life cycle strategies for a given organism using the
model developed in [6]. The numerical method developed here provides a solution
to this problem.

The basic numerical method for solving differential equations (3.2-3.5) and
(3.8-3.11) is simple in concept and design and could easily be extended to deal
with further constraints in a more complex model. The two optimisation methods
considered, the projected gradient and conditional gradient, have individual
problems which need further consideration. The projected gradient method is
very slow to converge if the arbitrary initial control has its switching point
away from the exact switching point. The conditional gradient method does not
cope very well with problems whose solution contains a singular arc, either
failing to converge or finding an alternative two phase solution if the
switching point is chosen to be in almost the exact position.

The linear problem looked at in chapter five converges to a slightly different
solution for each switching point specified using the projected gradient method,
with a maximum solution obtaining the value r = 0 when the switching point is
chosen to be almost the exact solution. There are a number of possible reasons
why this may happen. It could be an inherent property of the problem, caused by
the exact solution having an almost flat basin around the optimum, so that the
numerical method converges to a series of sub-optimal solutions that are very
close to the optimal, as can be observed in table 5.2 and table 5.4. The most
likely alternative is that the errors are a result of the convergence criterion
not being strict enough on the outer iteration loop. The reason for not making
it stricter lies in the number of iterations necessary for convergence being
excessive if the tolerance used is less than 0.005.

It would be interesting to test the method with some real data and compare the
strategies predicted by the numerical method with the observed strategies used
by the organism. This would not only provide a means of validating the
numerical method fully, but would also thoroughly validate the model developed
in [6].

Acknowledgements

I would like to thank my project tutor Dr N. K. Nichols for her assistance with
this work. I would also like to thank all other members of the mathematics
department who have helped in the preparation of this work, and without whose
help it would never have been completed. I thank SERC for their financial
support.

Bibliography

[1] T. P. Andrews, N. K. Nichols and Z. Xu. The form of optimal controllers for
    tidal-power-generation schemes. Numerical Analysis Report 8/90, Reading
    University.

[2] D. J. Bell, P. A. Cook and N. Munro. Design of Modern Control Systems.
    Peter Peregrinus Ltd, 1982.

[3] R. L. Burden and J. D. Faires. Numerical Analysis, fourth edition. PWS-Kent
    Publishing Company.

[4] B. Charlesworth. Evolution in Age-Structured Populations. Cambridge
    University Press, 1980.

[5] S. Lyle and N. K. Nichols. Numerical methods for optimal control problems
    with state constraints. Numerical Analysis Report 8/91, Reading University.

[6] N. Perrin, R. M. Sibly and N. K. Nichols. Optimal growth strategies when
    mortality and production rate are size dependent. Numerical Analysis
    Report 12/92, Reading University.