Optimal Control Theory With Aerospace Applications
All content following this page was uploaded by Munnujahan Ara on 04 May 2014.
ABSTRACT
Control theory is one of the most important mathematical milestones of the present century. There are now many branches of science and technology in which control theory plays a central role and faces fascinating challenges. In some cases one expects to solve the problem by means of technological developments that will make it possible to implement more sophisticated control mechanisms. In this study we briefly mention some of the fields in which these challenges are present. Our main objectives are to investigate some aspects of the calculus of variations and of control theory, as well as their scope and applications. We study here the application of control theory to landing a space vehicle while optimally controlling its fuel.
1. INTRODUCTION
Control theory is used in almost every field of modern science. Optimal control theory is a branch of dynamic optimization and a generalization of the calculus of variations. See [1], [4], [5], [6], [11] and [14] for more details and for the history of control theory. The theory of optimization continues to be an area of active research, not only for mathematicians but also for engineers, which is an indication both of the inherent beauty of the subject and of its relevance to modern developments in engineering, science, industry and commerce. Optimal control now plays a central role in many engineering applications, especially in systems and control engineering such as robotics and aeronautics (see for example [10] and [13]); in the life sciences such as sustainable forest management (see for example [2]); and in mathematical biology and medicine, such as modeling and optimally controlling infectious diseases (see for example [8] and [9]). In the past few decades there has been an overwhelming demand for the development of technology enabling successful applications of control theory in aerospace engineering. In this paper our aim is to investigate optimal control theory and some of its applications, especially the control of fuel while landing a space vehicle.
at a slightly different time $t_1 + \delta t$. The end conditions give
$$x_i^*(t_1 + \delta t) = x_i(t_1 + \delta t), \qquad i = 1, 2.$$
As usual in variational arguments we are in the first instance interested only in first-order effects, and from the end conditions we deduce that
$$\delta x_i(t_1) + \dot{x}_i^*(t_1)\,\delta t = 0, \qquad i = 1, 2.$$
If we now use the state equations we obtain
$$\delta x_i(t_1) = -u_i^*(t_1)\,\delta t,$$
where $u_i^*(t_1)$ denotes $u_i\bigl(x_1^*(t_1), x_2^*(t_1), u^*(t_1)\bigr)$. To simplify the notation we write $u_i^*$ for $u_i\bigl(x_1^*(t_1), x_2^*(t_1), u^*(t_1)\bigr)$, and we adopt the same convention throughout.
B a k u , A z e r b a i j a n | 349
INTERNATIONAL JOURNAL Of ACADEMIC RESEARCH Vol. 3. No. 2. March, 2011, Part II
The first variation of $J$ is
$$\delta J = \int_{t_0}^{t_1} \left( \frac{\partial u_0}{\partial x_1}\,\delta x_1 + \frac{\partial u_0}{\partial x_2}\,\delta x_2 + \frac{\partial u_0}{\partial u}\,\delta u \right) dt + u_0^*(t_1)\,\delta t.$$
The derivatives in the integrand are evaluated on the optimal trajectory. If $u^*$ is optimal it is necessary that the first variation $\delta J$ be zero, so
$$\delta J = \int_{t_0}^{t_1} \left( \frac{\partial u_0}{\partial x_1}\,\delta x_1 + \frac{\partial u_0}{\partial x_2}\,\delta x_2 + \frac{\partial u_0}{\partial u}\,\delta u \right) dt + u_0^*(t_1)\,\delta t = 0$$
on an optimal path for admissible variations.
The variations $\delta u$, $\delta x_1$, $\delta x_2$ are not independent here; they are linked by the state equations. This is the constrained optimal control problem dealt with in much of the literature (see for example [4], [5], [11] and [14]). In this case we simply introduce two Lagrange multipliers $\lambda_1(t)$ and $\lambda_2(t)$, which we have chosen to be time-dependent. Now consider the pair of integrals
$$I_i = \int_{t_0}^{t_1} \lambda_i(t)\bigl(u_i - \dot{x}_i\bigr)\,dt, \qquad i = 1, 2.$$
They are both zero because the state equations must be satisfied. If we now let $u^*$ be optimal and we calculate the first variation, then $\delta I_i = 0$, since $I_i = 0$ for all admissible controls.
A straightforward calculation then gives
$$\delta I_i = \int_{t_0}^{t_1} \lambda_i(t) \left( \frac{\partial u_i}{\partial x_1}\,\delta x_1 + \frac{\partial u_i}{\partial x_2}\,\delta x_2 + \frac{\partial u_i}{\partial u}\,\delta u - \frac{d}{dt}\,\delta x_i \right) dt.$$
Now, integrating by parts and using $\delta x_i(t_0) = 0$ and $\delta x_i(t_1) = -u_i^*(t_1)\,\delta t$,
$$\int_{t_0}^{t_1} \lambda_i(t)\,\frac{d}{dt}\,\delta x_i\,dt = \bigl[\lambda_i(t)\,\delta x_i\bigr]_{t_0}^{t_1} - \int_{t_0}^{t_1} \dot{\lambda}_i\,\delta x_i\,dt = -u_i^*(t_1)\,\lambda_i(t_1)\,\delta t - \int_{t_0}^{t_1} \dot{\lambda}_i\,\delta x_i\,dt.$$
Hence
$$\delta I_i = \int_{t_0}^{t_1} \lambda_i \left( \frac{\partial u_i}{\partial x_1}\,\delta x_1 + \frac{\partial u_i}{\partial x_2}\,\delta x_2 + \frac{\partial u_i}{\partial u}\,\delta u \right) dt + \int_{t_0}^{t_1} \dot{\lambda}_i\,\delta x_i\,dt + u_i^*(t_1)\,\lambda_i(t_1)\,\delta t = 0.$$
The condition $\delta J = 0$ can now be replaced by the condition
$$\delta J + \delta I_1 + \delta I_2 = 0.$$
On substituting for $\delta J$, $\delta I_1$ and $\delta I_2$ and rearranging the terms we obtain
$$\int_{t_0}^{t_1} \left( \frac{\partial u_0}{\partial x_1} + \lambda_1\,\frac{\partial u_1}{\partial x_1} + \lambda_2\,\frac{\partial u_2}{\partial x_1} + \dot{\lambda}_1 \right) \delta x_1\,dt \;+$$
$$\int_{t_0}^{t_1} \left( \frac{\partial u_0}{\partial x_2} + \lambda_1\,\frac{\partial u_1}{\partial x_2} + \lambda_2\,\frac{\partial u_2}{\partial x_2} + \dot{\lambda}_2 \right) \delta x_2\,dt \;+$$
$$\int_{t_0}^{t_1} \left( \frac{\partial u_0}{\partial u} + \lambda_1\,\frac{\partial u_1}{\partial u} + \lambda_2\,\frac{\partial u_2}{\partial u} \right) \delta u\,dt \;+$$
$$\bigl( u_0^*(t_1) + \lambda_1(t_1)\,u_1^*(t_1) + \lambda_2(t_1)\,u_2^*(t_1) \bigr)\,\delta t = 0$$
for admissible variations $\delta u$, $\delta x_1$, $\delta x_2$, where, as usual, the derivatives are evaluated on the optimal path. The multipliers $\lambda_1$, $\lambda_2$ are at our disposal, and if we choose them to satisfy the equations
$$\dot{\lambda}_i = -\frac{\partial H}{\partial x_i}, \qquad i = 1, 2,$$
then the condition no longer involves $\delta x_1$ and $\delta x_2$. It becomes
$$\int_{t_0}^{t_1} \frac{\partial H}{\partial u}\,\delta u\,dt + H(t_1)\,\delta t = 0 \qquad \text{for allowed variations.}$$
Now we consider the variations $u^* + \delta u$ for which $\delta t = 0$; that is, the corresponding solutions for $x$ arrive at $x(t_1)$ at $t = t_1$. Then our condition becomes
$$\int_{t_0}^{t_1} \frac{\partial H}{\partial u}\,\delta u\,dt = 0 \qquad \text{for all admissible } \delta u.$$
From this we can deduce that $\partial H/\partial u = 0$ at every point on an optimal trajectory. Furthermore, we observe that if we allow variations for which $\delta t \neq 0$, we must still have $\partial H/\partial u = 0$ at every point, so that it is also necessary that $H(t_1) = 0$ at the end-point of an optimal trajectory. Thus a necessary condition for optimality is that $\partial H/\partial u = 0$ at each point on the optimal path and $H = 0$ at $t = t_1$ on the optimal path, where
$$H = u_0 + \lambda_1 u_1 + \lambda_2 u_2$$
and the functions $\lambda_i$ satisfy the equations
$$\dot{\lambda}_i = -\frac{\partial H}{\partial x_i}.$$
These equations are called the co-state equations, and $H$ is sometimes referred to as the Hamiltonian.
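For reference, the necessary conditions just derived can be collected in a single display (this is a restatement of the results above, in the same notation; nothing new is assumed):

```latex
\begin{aligned}
\dot{x}_i &= u_i(x_1, x_2, u), & i &= 1, 2, & &\text{(state equations)}\\[2pt]
\dot{\lambda}_i &= -\frac{\partial H}{\partial x_i}, & i &= 1, 2, & &\text{(co-state equations)}\\[2pt]
\frac{\partial H}{\partial u} &= 0 \ \text{on the optimal path}, & H(t_1) &= 0, & &\text{(optimality, free end-time)}
\end{aligned}
\qquad \text{where } H = u_0 + \lambda_1 u_1 + \lambda_2 u_2 .
```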
The thrust cannot be negative or arbitrarily large: $0 \le |u(t)| \le R$ for some $R > 0$. We have an optimization problem if we try to land the space vehicle minimizing the amount of fuel burned,
$$m_0 - m(t_1) = k \int_0^{t_1} |u(t)|\,dt = J(u).$$
The Hamiltonian
$$H = \lambda_1 x_2 + \lambda_2 u - |u|$$
is to be maximized as a function of $u$. Now we can write $u$ in the form $u = |u|\operatorname{sgn} u$ and hence express $H$ in the form
$$H = \lambda_1 x_2 + |u|\,S, \qquad S = \lambda_2 \operatorname{sgn} u - 1.$$
There are three possibilities for $\operatorname{sgn} S$:
(i) If $|\lambda_2| < 1$ then $S < 0$, so $H$ will be maximized by $u = 0$;
(ii) If $|\lambda_2| > 1$ then $S > 0$ when $\operatorname{sgn} u = \operatorname{sgn}\lambda_2$, so $H$ will be maximized by $u = \operatorname{sgn}\lambda_2$;
(iii) If $|\lambda_2| = 1$ then
$$S = \begin{cases} -2 & \text{for } \operatorname{sgn} u = -\operatorname{sgn}\lambda_2,\\ 0 & \text{for } \operatorname{sgn} u = \operatorname{sgn}\lambda_2. \end{cases}$$
Thus we are forced to choose $\operatorname{sgn} u = \operatorname{sgn}\lambda_2$, and we find that the control is not completely determined; its sign is known but its magnitude is indeterminate. We can only say that $u = v(t)\operatorname{sgn}\lambda_2$ where $0 \le v(t) \le 1$.
Thus the control satisfying the Pontryagin maximum principle [10] can be written
$$u^* = \begin{cases} 0 & \text{if } |\lambda_2| < 1,\\ \operatorname{sgn}\lambda_2 & \text{if } |\lambda_2| > 1,\\ v(t)\operatorname{sgn}\lambda_2, \ \ 0 \le v(t) \le 1, & \text{if } |\lambda_2| = 1. \end{cases}$$
The co-state variables are found to be $\lambda_1 = A$, $\lambda_2 = B - At$. If $|\lambda_2| = 1$ only at isolated times (that is, if $A \neq 0$), then $u^*$ is indeterminate only at those isolated instants, as it switches between $-1$ and $0$ or between $1$ and $0$. If $A = 0$ and $|B| = 1$ then $u^*$ is indeterminate for all $t$; the control is singular.
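The three-branch control law and the linear-in-time co-state can be sketched in code. This is an illustrative sketch of our own, not from the paper; the function name `u_star` and the sample constants `A`, `B` are hypothetical.

```python
import math

def u_star(lam2, v=1.0, tol=1e-12):
    """Fuel-optimal thrust maximizing H = lam1*x2 + lam2*u - |u|.

    |lam2| < 1 -> coast (u = 0); |lam2| > 1 -> full thrust sgn(lam2);
    |lam2| = 1 -> singular: u = v*sgn(lam2) for any 0 <= v <= 1.
    """
    if abs(lam2) < 1.0 - tol:
        return 0.0
    if abs(lam2) > 1.0 + tol:
        return math.copysign(1.0, lam2)
    return v * math.copysign(1.0, lam2)

# With lam1 = A != 0 and lam2 = B - A*t, |lam2| = 1 only at the isolated
# instants t = (B -+ 1)/A, where the control switches between 0 and +-1.
A, B = 1.0, 0.5  # sample (hypothetical) constants of integration
switch_times = sorted(t for t in ((B - 1.0) / A, (B + 1.0) / A) if t > 0)
```

With these sample constants the only positive switching instant is $t = (B+1)/A = 1.5$, matching the claim that a non-singular control is indeterminate only at isolated times.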
First let us consider the non-singular controls that maximize the Hamiltonian H.
Since $\lambda_2 = B - At$ and $A \neq 0$, $u^*$ can take only the values $-1$, $1$ and $0$. The corresponding trajectories are two families of parabolas $x_2^2 = 2u^*x_1 + k$ for $u^* = \pm 1$, and a family of straight lines $x_2 = l$ corresponding to $u^* = 0$. Note that when $u^* = 0$ we have $\dot{x}_1 = x_2$, $\dot{x}_2 = 0$, so there is a line of singularities on $x_2 = 0$. This means that no optimal control can end with $u^* = 0$. Thus, since $\lambda_2$ is linear in $t$, the only non-singular control sequences are
$$\{-1, 0, 1\},\ \{0, 1\},\ \{1\} \qquad \text{and} \qquad \{1, 0, -1\},\ \{0, -1\},\ \{-1\}. \tag{1}$$
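The parabolic form of the $u^* = \pm 1$ trajectories can be checked directly: along an arc with constant $u$, the quantity $x_2^2 - 2ux_1$ is conserved, since $\frac{d}{dt}(x_2^2 - 2ux_1) = 2x_2 u - 2u x_2 = 0$. A small numerical sketch of our own (the starting point is arbitrary; the stepping is exact for piecewise-constant $u$):

```python
def step(x1, x2, u, dt):
    """Exact update of x1' = x2, x2' = u over a step dt with u constant."""
    return x1 + x2 * dt + 0.5 * u * dt * dt, x2 + u * dt

def invariant(x1, x2, u):
    return x2 * x2 - 2.0 * u * x1  # constant along a u = +-1 arc

x1, x2, u = 3.0, -1.0, 1.0   # arbitrary starting point, thrust u = +1
k0 = invariant(x1, x2, u)
for _ in range(1000):
    x1, x2 = step(x1, x2, u, 0.01)
assert abs(invariant(x1, x2, u) - k0) < 1e-9   # still on the same parabola
```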
Unfortunately we cannot construct an optimal solution from a general initial point using these control sequences. Let us calculate the fuel consumed in going from an initial point $(\xi_1, \xi_2)$ to $(0, 0)$ using any admissible control. On any path $\dot{x}_2 = u$, $|u| \le 1$, so
$$x_2(t) - \xi_2 = \int_0^t u\,dt, \qquad \text{and hence} \qquad J = \int_0^{t_1} |u|\,dt \ \ge\ \left|\int_0^{t_1} u\,dt\right| = |x_2(t_1) - \xi_2| = |\xi_2|,$$
since the system must arrive with $x_2(t_1) = 0$.
If we can find a $u(t)$ for which the corresponding value of $J$ is $|\xi_2|$, then that control must be optimal. We shall show that there are some initial states for which there are infinitely many fuel-optimal controls, and other states for which there is no fuel-optimal control.
Let us divide the phase plane as shown in Fig. 1(b). We observe that $O_+$ is the half-parabola $x_2^2 = 2x_1$, $x_2 \le 0$, and similarly $O_-$ is the half-parabola $x_2^2 = -2x_1$, $x_2 \ge 0$; these are the $u = 1$ and $u = -1$ trajectories entering the origin. The region $R_1$ lies above $O_+$ and $\{x_2 = 0,\ x_1 > 0\}$, to the right of $O_-$; it includes $\{x_2 = 0,\ x_1 > 0\}$ and excludes $O_-$. The region $R_2$ lies between $O_-$ and $\{x_2 = 0,\ x_1 < 0\}$; it includes $O_-$ and excludes $\{x_2 = 0,\ x_1 < 0\}$. The regions $R_3$ and $R_4$ are the mirror images of $R_1$ and $R_2$ under reflection through the origin.
Now it is easy to show that:
(a) for $(\xi_1, \xi_2)$ in $R_1$ or $R_3$ there is no optimal control;
(b) for $(\xi_1, \xi_2)$ in $R_2$ or $R_4$ there are infinitely many optimal controls.
To prove (a) we consider $(\xi_1, \xi_2)$ in $R_3$. Consider first the non-singular controls that maximize $H$. They are listed in (1); to get to the origin we need the control sequence $\{1, 0, -1\}$. The switch from $u^* = 1$ to $u^* = 0$ must take place at a point lying in $R_2$ with $x_2 = \eta > 0$, and since $u^* = 1$ at the start we have $\dot{x}_2 = 1$, so the time taken to get from $x_2 = \xi_2$ to $x_2 = \eta$ is $t' = \eta + |\xi_2|$. The control is then switched to $u^* = 0$ and the truck drifts uncontrolled (consuming no fuel) along $x_2 = \eta$ until $O_-$ is reached. Then $u^*$ is switched to $-1$ for a time $t'' = \eta$, say, and the system gets to the origin with $x_2 = 0$. The fuel consumed is
$$J = t' + t'' = |\xi_2| + 2\eta > |\xi_2|, \qquad \text{since } \eta > 0.$$
No control sequence $\{1, 0, -1\}$ can therefore give $J$ its known minimum value. Singular controls arise when $\lambda_2 = 1$ or $-1$, not just at an isolated instant but for a time interval. This will happen if $A = 0$ and $B = 1$ or $-1$. Such controls, which are of the form $v(t)\operatorname{sgn} B$, $0 \le v(t) \le 1$, cannot change sign. This means that no initial state in $R_3$ can be driven to the origin by a singular control that maximizes $H$. To see this, note that in $R_3$ the $x_2$-coordinate is negative, so that to drive the system closer to the origin we need $u \ge 0$. Since the singular control cannot change sign, $x_2$ increases for all $t$ and the system is driven to infinity. It is impossible to reach the origin from $R_3$ using a singular control. Thus there is no optimal control for $(\xi_1, \xi_2)$ in $R_3$.
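The fuel-excess argument can be illustrated numerically. This is our own sketch; the initial point $(\xi_1, \xi_2) = (-1, -1)$ in $R_3$ and the switch level $\eta = 0.5$ are arbitrary hypothetical choices. The $\{1, 0, -1\}$ manoeuvre does land at the origin, but it burns $|\xi_2| + 2\eta$, strictly more than the lower bound $|\xi_2|$:

```python
def step(x1, x2, u, dt):
    """Exact update of x1' = x2, x2' = u over a step dt with u constant."""
    return x1 + x2 * dt + 0.5 * u * dt * dt, x2 + u * dt

xi1, xi2, eta = -1.0, -1.0, 0.5   # hypothetical initial point in R3, switch level eta > 0

t_up = eta - xi2                              # u = +1 until x2 = eta
x1a = xi1 + xi2 * t_up + 0.5 * t_up ** 2      # position at the end of the thrust-up arc
t_drift = (-0.5 * eta ** 2 - x1a) / eta       # coast along x2 = eta to O- (x1 = -eta^2/2)

x1, x2, fuel = xi1, xi2, 0.0
for u, T in ((1.0, t_up), (0.0, t_drift), (-1.0, eta)):
    x1, x2 = step(x1, x2, u, T)
    fuel += abs(u) * T

assert abs(x1) < 1e-12 and abs(x2) < 1e-12    # the manoeuvre ends at the origin
assert abs(fuel - (abs(xi2) + 2 * eta)) < 1e-12
assert fuel > abs(xi2)                        # strictly above the lower bound |xi2|
```

Letting $\eta \to 0$ drives the fuel cost toward $|\xi_2|$ without ever attaining it, which is why no optimal control exists in $R_3$.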
To prove (b) we consider an initial state in $R_4$. Suppose first that the control is non-singular. If we take $u^* = 0$ until the system has drifted along $x_2 = \xi_2$ to a point on $O_+$, and then switch to $u^* = 1$, we can steer the system to the origin and the value of $J$ is $|\xi_2|$. This is an optimal control. Now suppose the control is singular. We must have $u = v(t)$, $0 \le v(t) \le 1$, with corresponding state equations $\dot{x}_1 = x_2$, $\dot{x}_2 = v(t)$, which are integrated to give
$$x_2 = \xi_2 + \int_0^t v(\tau)\,d\tau, \qquad x_1 = \xi_1 + \xi_2 t + \int_0^t\!\!\int_0^s v(\tau)\,d\tau\,ds.$$
At time $t_1$ the system is to be at $x_1 = x_2 = 0$, so
$$0 = \xi_2 + \int_0^{t_1} v(\tau)\,d\tau, \qquad 0 = \xi_1 + \xi_2 t_1 + \int_0^{t_1}\!\!\int_0^s v(\tau)\,d\tau\,ds. \tag{2}$$
Thus there are an infinite number of functions $v$, $0 \le v \le 1$, that satisfy (2). They are all optimal, since
$$J = \int_0^{t_1} |u|\,dt = \int_0^{t_1} v(\tau)\,d\tau = |\xi_2|.$$
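To make the multiplicity concrete, here is a numerical sketch of our own (the initial point $(3, -1)$ in $R_4$ and the two thrust profiles are arbitrary admissible choices): a coast-then-full-thrust plan and a coast-then-half-thrust plan both reach the origin with the same fuel $J = |\xi_2| = 1$.

```python
def step(x1, x2, u, dt):
    """Exact update of x1' = x2, x2' = u over a step dt with u constant."""
    return x1 + x2 * dt + 0.5 * u * dt * dt, x2 + u * dt

def fly(x1, x2, phases):
    """Apply piecewise-constant (u, duration) phases; return final state and fuel."""
    fuel = 0.0
    for u, T in phases:
        x1, x2 = step(x1, x2, u, T)
        fuel += abs(u) * T
    return x1, x2, fuel

xi1, xi2 = 3.0, -1.0                      # hypothetical initial point in R4
plans = [
    ((0.0, 2.5), (1.0, 1.0)),             # coast to O+, then u = 1 (bang)
    ((0.0, 2.0), (0.5, 2.0)),             # coast, then singular-type u = v = 1/2
]
results = [fly(xi1, xi2, plan) for plan in plans]
for x1, x2, fuel in results:
    assert abs(x1) < 1e-12 and abs(x2) < 1e-12   # both land at the origin
    assert abs(fuel - abs(xi2)) < 1e-12          # both burn exactly |xi2|
```

Both plans satisfy the two constraints in (2), so both attain the lower bound; any $v$ between these extremes that satisfies (2) is equally optimal.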
Again, to prove (a), let us consider $(\xi_1, \xi_2)$ in $R_1$. Consider first the non-singular controls that maximize $H$. They are listed in (1); to get to the origin we need the control sequence $\{-1, 0, 1\}$. The switch from $u^* = -1$ to $u^* = 0$ must take place at a point lying in $R_4$ with $x_2 = -\eta < 0$, and since $u^* = -1$ at the start we have $\dot{x}_2 = -1$, so the time taken to get from $x_2 = \xi_2$ to $x_2 = -\eta$ is $t' = \eta + |\xi_2|$. The control is then switched to $u^* = 0$ and the truck drifts uncontrolled (consuming no fuel) along $x_2 = -\eta$ until $O_+$ is reached. Then $u^*$ is switched to $1$ for a time $t'' = \eta$, say, and the system gets to the origin with $x_2 = 0$. The fuel consumed is
$$J = t' + t'' = |\xi_2| + 2\eta > |\xi_2|.$$
No control sequence $\{-1, 0, 1\}$ can therefore give $J$ its known minimum value. Singular controls arise when $\lambda_2 = 1$ or $-1$, not just at an isolated instant but for a time interval. This will happen if $A = 0$ and $B = 1$ or $-1$. Such controls, which are of the form $v(t)\operatorname{sgn} B$, $0 \le v(t) \le 1$, cannot change sign. This means that no initial state in $R_1$ can be driven to the origin by a singular control that maximizes $H$. To see this, note that in $R_1$ the $x_2$-coordinate is positive, so that to drive the system closer to the origin we need $u \le 0$. Since the singular control cannot change sign, $x_2$ decreases for all $t$ and the system is driven to infinity. It is impossible to reach the origin from $R_1$ using a singular control.
Also, to prove (b), we consider an initial state in $R_2$. Suppose first that the control is non-singular. If we take $u^* = 0$ until the system has drifted along $x_2 = \xi_2$ to a point on $O_-$, and then switch to $u^* = -1$, we can steer the system to the origin and the value of $J$ is $|\xi_2|$. This is an optimal control. Now suppose the control is singular. We must have $u = -v(t)$, $0 \le v(t) \le 1$, with corresponding state equations $\dot{x}_1 = x_2$, $\dot{x}_2 = -v(t)$, which are again integrated to give
$$x_2 = \xi_2 - \int_0^t v(\tau)\,d\tau, \qquad x_1 = \xi_1 + \xi_2 t - \int_0^t\!\!\int_0^s v(\tau)\,d\tau\,ds.$$
At time $t_1$ the system is to be at $x_1 = x_2 = 0$, so
$$0 = \xi_2 - \int_0^{t_1} v(\tau)\,d\tau, \qquad 0 = \xi_1 + \xi_2 t_1 - \int_0^{t_1}\!\!\int_0^s v(\tau)\,d\tau\,ds.$$
Thus there are an infinite number of functions $v$, $0 \le v \le 1$, that satisfy these conditions. They are all optimal, since
$$J = \int_0^{t_1} |u|\,dt = \int_0^{t_1} v(\tau)\,d\tau = |\xi_2|.$$
In this section we discuss some applications of control theory, illustrated by two examples. The Pontryagin maximum principle [12] is a useful necessary condition which we can now use to solve a range of control problems. We look first at the problem of controlling a linear system in a time-optimal manner. The truck problem is the simplest two-dimensional problem of this type; the general problem is dealt with in the next section. For the truck problem the control $u = u(t)$ was subject to the constraint $|u| \le k/m$; in what follows the constraint has been normalized to $|u| \le 1$, but there is no loss of generality. As was explained earlier we can also
Fig. 2(a). The Optimal Path.
Fig. 2(b). The Optimal Path.
Since $u^*$ minimizes $J$, the set of varied end-points $E$ must have a plane of support $\pi$ at $D$. Recall that $\pi$ is such that $E$ lies on one side of $\pi$ and the half-line on the other. The tangent to the target curve at $D$ either lies entirely in $\pi$ or passes through $\pi$ at $D$. We shall show that the second possibility leads to a contradiction, so for optimality the tangent to the target curve at $D$ must lie in $\pi$. This geometric result can be expressed as a simple condition involving the co-state variables at $t_1$ and the tangent to the target curve in state space: the two-dimensional vector $(\lambda_1(t_1), \lambda_2(t_1))^T$ must be perpendicular to the tangent to the target at the optimal end-point. This is the transversality condition. We can write it as follows: let $(v, w)^T$ be the tangent to the target curve at $(x_1^*(t_1), x_2^*(t_1))$; then $\lambda_1(t_1)\,v + \lambda_2(t_1)\,w = 0$.
The problem is then: steer the system to the given target point $x(t_1) = x^1$ by an admissible control (piecewise continuous and taking its values from the set $U$ such that $|u| \le 1$), and find the optimal control $u^*(t)$ for which
$$J = \int_{t_0}^{t_1} 1\,dt = t_1 - t_0$$
is minimized.
Solution: We first need to write down the Hamiltonian $H(\lambda, x, u)$, find the co-state equations, and then maximize $H$ as a function of $u$.
We know that the Hamiltonian is given by
$$H(\lambda, x, u) = -1 + \lambda_1(ax_1 + bx_2 + lu) + \lambda_2(cx_1 + dx_2 + mu)$$
$$= -1 + \lambda_1(ax_1 + bx_2) + \lambda_2(cx_1 + dx_2) + (l\lambda_1 + m\lambda_2)u.$$
Now the co-state equations are derived as
$$\dot{\lambda}_1 = -\frac{\partial H}{\partial x_1} = -(a\lambda_1 + c\lambda_2), \qquad \dot{\lambda}_2 = -\frac{\partial H}{\partial x_2} = -(b\lambda_1 + d\lambda_2),$$
or, in matrix notation, $\dot{\lambda} = -A^T\lambda$, where $\lambda = (\lambda_1, \lambda_2)^T$.
We now choose, at each value of $t$, the value of $u = u(t)$ that maximizes the Hamiltonian. We note that $H$ is linear in $u$, so to maximize $H$ we need $u = 1$ or $u = -1$, depending on the sign of the coefficient $l\lambda_1 + m\lambda_2$.
Thus the only controls that can lead to a minimum time of transfer are those of the form
$$u^* = \operatorname{sgn}(l\lambda_1 + m\lambda_2).$$
They are piecewise constant controls that are discontinuous at the zeros of
$$S(t) = l\lambda_1(t) + m\lambda_2(t). \tag{3}$$
That is, they switch from $1$ to $-1$ or from $-1$ to $1$ whenever $S = 0$. For this reason $S$ as defined by (3) is called the switching function. In the interval between two zeros of $S$ the control is constant, so the state equations become autonomous,
$$\dot{x}_1 = ax_1 + bx_2 + lu^*, \qquad \dot{x}_2 = cx_1 + dx_2 + mu^*, \qquad u^* = 1 \text{ or } -1,$$
and the form of the trajectories in the $x_1x_2$-plane is easily found in each case. Provided that $ad - bc \neq 0$, the trajectories for $u^* = 1$ will have an isolated singularity at the intersection of
$$ax_1 + bx_2 + l = 0 \qquad \text{and} \qquad cx_1 + dx_2 + m = 0,$$
while the trajectories for $u^* = -1$ will have an isolated singularity at the intersection of
$$ax_1 + bx_2 - l = 0 \qquad \text{and} \qquad cx_1 + dx_2 - m = 0.$$
The behavior of both families of trajectories is determined by the eigenvalues of the system matrix $A$. The trajectory pattern (the shape of the trajectories and the direction in which they are swept out as $t$ increases) is the same as the pattern of the trajectories of the uncontrolled autonomous system $\dot{x} = Ax$. The only difference is that the whole phase-plane pattern is translated so that the singularity is at the solution of $ax_1 + bx_2 + l = 0$ and $cx_1 + dx_2 + m = 0$ for $u^* = 1$, and at the solution of $ax_1 + bx_2 - l = 0$ and $cx_1 + dx_2 - m = 0$ for $u^* = -1$.
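The two singularity locations are just the solutions of the corresponding 2×2 linear systems, which can be computed by Cramer's rule. The sketch below is our own illustration; the coefficients `a, b, c, d, l, m` are hypothetical sample values, not taken from the paper.

```python
def singular_point(a, b, c, d, l, m, u):
    """Solve a*x1 + b*x2 + l*u = 0, c*x1 + d*x2 + m*u = 0 by Cramer's rule.

    u = +1 or -1 selects the trajectory family; requires ad - bc != 0.
    """
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: no isolated singularity")
    x1 = (-l * u * d + b * m * u) / det
    x2 = (-a * m * u + c * l * u) / det
    return x1, x2

# Hypothetical sample system: a, b, c, d = 0, 1, -2, -3 with forcing l, m = 0, 1.
p_plus = singular_point(0.0, 1.0, -2.0, -3.0, 0.0, 1.0, +1.0)
p_minus = singular_point(0.0, 1.0, -2.0, -3.0, 0.0, 1.0, -1.0)
assert p_plus == (0.5, 0.0) and p_minus == (-0.5, 0.0)
```

As expected, the two singular points are reflections of each other through the origin, since flipping $u$ flips the right-hand sides of both equations.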
We recall that a singular point in the phase plane represents a solution that is constant for all $t$. None of the trajectories of the system can pass through (or begin or end at) a singular point. In the following examples we use phrases such as 'the path RO'. Occasionally the point R is a singularity and, strictly speaking, we should say 'the path RO with the singularity at R excluded', but doing so would be very cumbersome. Provided the reader bears this in mind there should be no confusion.
4. CONCLUSION
Nowadays the applications of control theory to real-world problems have become more crucial than its purely theoretical aspects, so bridging theory and real-world applications is the main objective of present-day research in control theory. In this study the application of control theory to aerospace dynamics is investigated. We discussed applications of control theory and presented some problems based on the Pontryagin maximum principle. We have also discussed the time-optimal control of linear systems, and we have applied the theory to landing a space vehicle while optimally controlling its fuel. Finally, we conclude from our discussion in Section 2 that from the regions $R_1$ and $R_3$ it is impossible to land the space vehicle with fuel-optimal control, while from the regions $R_2$ and $R_4$ the vehicle can be landed with optimal control of its fuel.
REFERENCES