


INTERNATIONAL JOURNAL Of ACADEMIC RESEARCH Vol. 3. No. 2. March, 2011, Part II

OPTIMAL CONTROL THEORY AND ITS APPLICATIONS IN AEROSPACE ENGINEERING
Md. Haider Ali Biswas*, Md. Azmol Huda, Munnujahan Ara, Md. Ashikur Rahman

Mathematics Discipline, Khulna University, Khulna-9208 (BANGLADESH)


*Corresponding author: [email protected].

ABSTRACT

Control theory is one of the most important mathematical milestones of the present century. At present there are many branches of science and technology in which control theory plays a central role and faces fascinating challenges. In some cases one expects to solve the problem by means of technological developments that will make it possible to implement more sophisticated control mechanisms. In this study we briefly mention some of the fields in which these challenges are present. Our main objectives are to investigate some aspects of the calculus of variations and of control theory, as well as their scope and applications. We study here the application of control theory to landing a space vehicle while optimally controlling its fuel.

Key words: Control theory, Space vehicle, Hamiltonian H.

1. INTRODUCTION

Control theory is used in almost every field of the modern sciences. Optimal control theory is a branch of dynamic optimization and a generalization of the calculus of variations; see [1], [4], [5], [6], [11] and [14] for more details as well as for the history of control theory. The theory of optimization continues to be an area of active research not only for mathematicians but also for engineers, an indication both of the inherent beauty of the subject and of its relevance to modern developments in engineering, science, industry and commerce. Optimal control now plays a central role in many engineering applications, especially in systems and control engineering such as robotics and aeronautics (see for example [10] and [13]); in the life sciences, such as sustainable forest management (see for example [2]); and in mathematical biology and medicine, such as the modeling and optimal control of infectious diseases (see for example [8] and [9]). In the past few decades there has been an overwhelming demand for the development of technology enabling successful applications of control theory in aerospace engineering. In this paper our aim is to investigate optimal control theory and some of its applications, especially the control of fuel during the landing of a space vehicle.

1.1 Determination of the Hamiltonian H

The Hamiltonian plays a significant role in deriving the necessary conditions of optimality for optimal control problems (see [3], [7] and [12] for a detailed study of the Hamiltonian). Before discussing our main problem, we first determine the Hamiltonian. We restrict ourselves to one control variable $u$, so that $U$ is a closed interval on the real line. The state equations are then
$$\dot{x}_1 = u_1(x_1, x_2, u), \qquad \dot{x}_2 = u_2(x_1, x_2, u).$$
Let $u^*(t)$ be an optimal control and $x^*(t)$ the corresponding optimal path. Consider a small variation of $u^*$ such that $u = u^* + \delta u(t)$, with corresponding path $(x_1^* + \delta x_1,\ x_2^* + \delta x_2)$. This varied path will not arrive at $x^1$ at $t_1$ but at a slightly different time $t_1 + \delta t$. The end conditions give
$$x_i^*(t_1 + \delta t) + \delta x_i(t_1 + \delta t) = x_i^1, \quad i = 1, 2.$$
As usual in variational arguments we are in the first instance interested only in first-order effects, and from these conditions we deduce that
$$\delta x_i(t_1) + \dot{x}_i^*(t_1)\,\delta t = 0, \quad i = 1, 2.$$
If we now use the state equations we obtain
$$\delta x_i(t_1) = -u_i(t_1)\,\delta t,$$
where $u_i(t_1)$ denotes $u_i\big(x_1^*(t_1), x_2^*(t_1), u^*(t_1)\big)$. To simplify the notation we let $u_i$ denote $u_i\big(x_1^*(t), x_2^*(t), u^*(t)\big)$, and we adopt the same convention for $u_0$ and for the partial derivatives $\partial u_i/\partial x_j$ and $\partial u_i/\partial u$.
Then the consequent change $\delta J$ in $J$ is
$$\delta J = \int_{t_0}^{t_1 + \delta t} u_0(x_1^* + \delta x_1,\ x_2^* + \delta x_2,\ u^* + \delta u)\,dt - \int_{t_0}^{t_1} u_0(x_1^*, x_2^*, u^*)\,dt$$
$$= \int_{t_0}^{t_1}\left(\frac{\partial u_0}{\partial x_1}\delta x_1 + \frac{\partial u_0}{\partial x_2}\delta x_2 + \frac{\partial u_0}{\partial u}\delta u\right)dt + u_0(t_1)\,\delta t + O(\delta u^2),$$
where the derivatives in the integrand are evaluated on the optimal trajectory. Let $\delta J$ denote the first variation. If $u^*$ is optimal it is necessary that the first variation vanishes, so
$$\delta J = \int_{t_0}^{t_1}\left(\frac{\partial u_0}{\partial x_1}\delta x_1 + \frac{\partial u_0}{\partial x_2}\delta x_2 + \frac{\partial u_0}{\partial u}\delta u\right)dt + u_0(t_1)\,\delta t = 0$$
on an optimal path for admissible variations.
The variations $\delta u, \delta x_1, \delta x_2$ are not independent here; they are linked by the state equations. This is the constrained optimal control problem dealt with in much of the literature (see for example [4], [5], [11] and [14]). In this case we simply introduce two Lagrange multipliers $\lambda_1(t)$ and $\lambda_2(t)$, chosen to be time-dependent, and consider the pair of integrals
$$I_i = \int_{t_0}^{t_1} \lambda_i(t)\big[\dot{x}_i - u_i(x_1, x_2, u)\big]\,dt, \quad i = 1, 2.$$
They are both zero because the state equations must be satisfied. If we now let $u^*$ be optimal and calculate the first variation, then $\delta I_i = 0$, since $I_i = 0$ for all variations.
A straightforward calculation then gives
$$\delta I_i = \int_{t_0}^{t_1} \lambda_i(t)\left[\frac{d}{dt}\delta x_i - \frac{\partial u_i}{\partial x_1}\delta x_1 - \frac{\partial u_i}{\partial x_2}\delta x_2 - \frac{\partial u_i}{\partial u}\delta u\right]dt.$$
Now, integrating by parts,
$$\int_{t_0}^{t_1} \lambda_i(t)\,\frac{d}{dt}\delta x_i\,dt = \big[\lambda_i(t)\,\delta x_i\big]_{t_0}^{t_1} - \int_{t_0}^{t_1} \dot{\lambda}_i\,\delta x_i\,dt = -u_i(t_1)\,\lambda_i(t_1)\,\delta t - \int_{t_0}^{t_1} \dot{\lambda}_i\,\delta x_i\,dt,$$
since $\delta x_i(t_0) = 0$ and $\delta x_i(t_1) = -u_i(t_1)\,\delta t$. Thus
$$\delta I_i = -\int_{t_0}^{t_1} \lambda_i(t)\left[\frac{\partial u_i}{\partial x_1}\delta x_1 + \frac{\partial u_i}{\partial x_2}\delta x_2 + \frac{\partial u_i}{\partial u}\delta u\right]dt - \int_{t_0}^{t_1} \dot{\lambda}_i\,\delta x_i\,dt - u_i(t_1)\,\lambda_i(t_1)\,\delta t = 0.$$

The condition $\delta J = 0$ can now be replaced by the condition
$$\delta J - \delta I_1 - \delta I_2 = 0.$$
On substituting for $\delta J$, $\delta I_1$ and $\delta I_2$ and rearranging the terms we obtain
$$\int_{t_0}^{t_1}\left[\frac{\partial u_0}{\partial x_1} + \lambda_1\frac{\partial u_1}{\partial x_1} + \lambda_2\frac{\partial u_2}{\partial x_1} + \dot{\lambda}_1\right]\delta x_1\,dt + \int_{t_0}^{t_1}\left[\frac{\partial u_0}{\partial x_2} + \lambda_1\frac{\partial u_1}{\partial x_2} + \lambda_2\frac{\partial u_2}{\partial x_2} + \dot{\lambda}_2\right]\delta x_2\,dt$$
$$+ \int_{t_0}^{t_1}\left[\frac{\partial u_0}{\partial u} + \lambda_1\frac{\partial u_1}{\partial u} + \lambda_2\frac{\partial u_2}{\partial u}\right]\delta u\,dt + \big[u_0(t_1) + \lambda_1(t_1)\,u_1(t_1) + \lambda_2(t_1)\,u_2(t_1)\big]\,\delta t = 0.$$
This can be written more compactly if we introduce the Hamiltonian function
$$H = u_0(x_1, x_2, u) + \lambda_1 u_1(x_1, x_2, u) + \lambda_2 u_2(x_1, x_2, u).$$
Then we have
$$\int_{t_0}^{t_1}\left[\frac{\partial H}{\partial x_1} + \dot{\lambda}_1\right]\delta x_1\,dt + \int_{t_0}^{t_1}\left[\frac{\partial H}{\partial x_2} + \dot{\lambda}_2\right]\delta x_2\,dt + \int_{t_0}^{t_1}\frac{\partial H}{\partial u}\,\delta u\,dt + H(t_1)\,\delta t = 0$$


for admissible variations $\delta u, \delta x_1, \delta x_2$, where as usual the derivatives are evaluated on the optimal path. The multipliers $\lambda_1, \lambda_2$ are at our disposal, and if we choose them to satisfy the equations
$$\dot{\lambda}_i = -\frac{\partial H}{\partial x_i}, \quad i = 1, 2,$$
then the condition no longer involves $\delta x_1$ and $\delta x_2$. It becomes
$$\int_{t_0}^{t_1}\frac{\partial H}{\partial u}\,\delta u\,dt + H(t_1)\,\delta t = 0 \quad \text{for allowed variations.}$$

Now consider the variations $u^* + \delta u$ for which $\delta t = 0$; that is, the corresponding solutions for $x$ arrive at $x^1$ at $t = t_1$. Then our condition becomes
$$\int_{t_0}^{t_1}\frac{\partial H}{\partial u}\,\delta u\,dt = 0 \quad \text{for all admissible } \delta u.$$
From this we can deduce that $\partial H/\partial u = 0$ at every point on an optimal trajectory. Furthermore, if we allow variations for which $\delta t \neq 0$, we must still have $\partial H/\partial u = 0$ at every point, so it is also necessary that $H(t_1) = 0$ at the end-point of an optimal trajectory. Thus a necessary condition for optimality is that $\partial H/\partial u = 0$ at each point on the optimal path and $H = 0$ at $t = t_1$, where
$$H = u_0 + \lambda_1 u_1 + \lambda_2 u_2$$
and the functions $\lambda_i$ satisfy the equations
$$\dot{\lambda}_i = -\frac{\partial H}{\partial x_i}.$$
These equations are called the co-state equations, and $H$ is sometimes referred to as the Hamiltonian.
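The necessary conditions above can be checked mechanically. The sketch below (not from the paper) forms $H = u_0 + \lambda_1 u_1 + \lambda_2 u_2$ for a toy choice of $u_0, u_1, u_2$ (hypothetical functions chosen only for illustration) and evaluates the co-state right-hand sides $-\partial H/\partial x_i$ and the stationarity quantity $\partial H/\partial u$ by central differences.

```python
# Sketch: H = u0 + lam1*u1 + lam2*u2 for the assumed toy data
#   u0 = u^2 (running cost), u1 = x2, u2 = -x1 + u  (hypothetical dynamics).
def H(x1, x2, u, lam1, lam2):
    u0 = u * u
    u1 = x2
    u2 = -x1 + u
    return u0 + lam1 * u1 + lam2 * u2

def partial(f, args, i, h=1e-6):
    # central-difference partial derivative of f with respect to argument i
    a_hi = list(args); a_hi[i] += h
    a_lo = list(args); a_lo[i] -= h
    return (f(*a_hi) - f(*a_lo)) / (2 * h)

pt = (0.7, -1.2, 0.5, 2.0, -3.0)      # (x1, x2, u, lam1, lam2), arbitrary point
costate1_rhs = -partial(H, pt, 0)      # lam1' = -dH/dx1, equals lam2 here
costate2_rhs = -partial(H, pt, 1)      # lam2' = -dH/dx2, equals -lam1 here
dH_du = partial(H, pt, 2)              # 2*u + lam2; zero at the stationary u = -lam2/2
```

For this toy $H$, $\partial H/\partial x_1 = -\lambda_2$ and $\partial H/\partial x_2 = \lambda_1$, so the computed co-state right-hand sides should agree with $\lambda_2$ and $-\lambda_1$ respectively.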

2. FUEL OPTIMAL LANDING OF THE SPACE VEHICLE


We are now in a position to discuss our main problem: the soft, fuel-optimal landing of a space vehicle. We assume that a space vehicle on a vertical trajectory tries to land smoothly on the surface of a planet. We denote by $h(t)$ the height at time $t$, so that $v(t) = h'(t)$ is the instantaneous velocity of the space vehicle (see [5] for details).

Since fuel is being consumed, the mass $m(t)$ of the vehicle is a non-increasing function of $t$. If we call $u(t)$ the instantaneous upward thrust, Newton's law gives
$$m(t)\,h''(t) = -g\,m(t) + u(t),$$
where $g$ is the acceleration of gravity. Assuming that the rate of decrease of mass is proportional to the thrust, that is, to the rate at which fuel is used up, we have
$$m'(t) = -k\,u(t), \quad k > 0.$$
We introduce $v(t) = h'(t)$ as a variable and obtain the first-order system of differential equations
$$h'(t) = v(t), \qquad v'(t) = -g + \frac{u(t)}{m(t)}.$$
At the initial time $t_0 = 0$ we have the initial conditions $h(0) = h_0$, $v(0) = v_0$, $m(0) = m_0$. The vehicle lands softly at time $t_1$ if $h(t_1) = 0$ and $v(t_1) = 0$.
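As a numerical sketch of these dynamics (all parameter values below are illustrative, not taken from the paper), a simple Euler integration of $h' = v$, $v' = -g + u/m$, $m' = -ku$ under a constant thrust might look like:

```python
# Euler integration of the vertical-landing dynamics under constant thrust u.
# Parameter values (masses, thrust, k) are assumptions for illustration only.
def simulate(h0, v0, m0, u, g=9.81, k=1e-4, dt=1e-3, t_end=5.0):
    h, v, m = h0, v0, m0
    t = 0.0
    while t < t_end and h > 0.0:
        h += v * dt
        v += (-g + u / m) * dt
        m += -k * u * dt      # mass decreases while thrust is applied
        t += dt
    return t, h, v, m

# Thrust chosen so that u/m0 = g: the vehicle initially descends at constant speed,
# then decelerates slightly as mass is burned off.
t, h, v, m = simulate(h0=100.0, v0=-5.0, m0=1000.0, u=9810.0)
```

Here the descent speed stays near its initial value because the net acceleration is close to zero; a real landing controller would shape $u(t)$ to bring $h$ and $v$ to zero together.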

Fig. 1(a). Landing of a space vehicle: trajectory of the rocket in the $(h, v, m)$-space and in the $(h, v)$-plane.



The thrust cannot be negative or arbitrarily large: $0 \le u(t) \le R$ for some $R > 0$. We have an optimization problem if we try to land the space vehicle minimizing the amount of fuel consumed,
$$J(u) = m_0 - m(t_1) = k\int_0^{t_1} u(t)\,dt.$$
After normalizing the problem to the double-integrator form $\dot{x}_1 = x_2$, $\dot{x}_2 = u$ with $|u| \le 1$ and cost $J = \int_0^{t_1}|u|\,dt$, the Hamiltonian
$$H = \lambda_1 x_2 + \lambda_2 u - |u|$$
is to be maximized as a function of $u$. Writing $u$ in the form $|u|\,\mathrm{sgn}\,u$, we can express $H$ as
$$H = \lambda_1 x_2 + |u|\,S, \qquad S = \lambda_2\,\mathrm{sgn}\,u - 1.$$
There are three possibilities for $\mathrm{sgn}\,S$:
(i) If $|\lambda_2| < 1$ then $S < 0$ for any $u \neq 0$, so $H$ will be maximized by $u = 0$;
(ii) If $|\lambda_2| > 1$ then $S > 0$ when $\mathrm{sgn}\,u = \mathrm{sgn}\,\lambda_2$, so $H$ will be maximized by $u = \mathrm{sgn}\,\lambda_2$;
(iii) If $|\lambda_2| = 1$ then $S = \mathrm{sgn}\,\lambda_2\,\mathrm{sgn}\,u - 1$, in which case
$$S = \begin{cases} -2 & \text{for } \mathrm{sgn}\,u = -\mathrm{sgn}\,\lambda_2, \\ 0 & \text{for } \mathrm{sgn}\,u = \mathrm{sgn}\,\lambda_2. \end{cases}$$
Thus we are forced to choose $\mathrm{sgn}\,u = \mathrm{sgn}\,\lambda_2$, and we find that the control is not completely determined: its sign is known but its magnitude is indeterminate. We can only say that $u = v(t)\,\mathrm{sgn}\,\lambda_2$ where $0 \le v(t) \le 1$. Thus the control satisfying the Pontryagin maximum principle [12] can be written
$$u^* = \begin{cases} 0 & \text{if } |\lambda_2| < 1, \\ \mathrm{sgn}\,\lambda_2 & \text{if } |\lambda_2| > 1, \\ v(t)\,\mathrm{sgn}\,\lambda_2,\ 0 \le v(t) \le 1, & \text{if } |\lambda_2| = 1. \end{cases}$$
The co-state variables are found to be $\lambda_1 = A$, $\lambda_2 = B - At$. If $A \neq 0$ then $|\lambda_2| = 1$ only at isolated times, so $u^*$ is indeterminate only at isolated instants as it switches between $-1$ and $0$ or $1$ and $0$. If $A = 0$ and $|B| = 1$ then $u^*$ is indeterminate for all $t$; the control is singular.
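The three-case control law can be implemented directly as a function of $\lambda_2$; the sketch below is illustrative (the `tol` parameter is an assumption introduced only to handle floating-point comparison):

```python
# Fuel-optimal control law from the maximum principle:
#   u* = 0 when |lam2| < 1, u* = sgn(lam2) when |lam2| > 1,
#   and indeterminate magnitude (singular case) when |lam2| = 1.
import math

def u_star(lam2, tol=1e-12):
    if abs(lam2) < 1.0 - tol:
        return 0.0
    if abs(lam2) > 1.0 + tol:
        return math.copysign(1.0, lam2)
    return None  # singular/indeterminate: u = v(t)*sgn(lam2), 0 <= v(t) <= 1

print(u_star(0.3))   # 0.0  (coast)
print(u_star(-1.7))  # -1.0 (full thrust downward)
print(u_star(1.0))   # None (singular case)
```

Returning `None` in the boundary case mirrors the text: the sign of $u$ is known there but its magnitude is not determined by the maximum principle alone.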

First let us consider the non-singular controls that maximize the Hamiltonian $H$. Since $\lambda_2 = B - At$ with $A \neq 0$, $u^*$ can take only the values $1$, $-1$ and $0$. The corresponding trajectories are two families of parabolas $x_2^2 = 2u^*x_1 + k$, $u^* = \pm 1$, and a family of straight lines $x_2 = l$ corresponding to $u^* = 0$. Note that when $u^* = 0$ we have $\dot{x}_1 = x_2$, $\dot{x}_2 = 0$, so there is a line of singularities on $x_2 = 0$. This means that no optimal control can end with $u^* = 0$. Thus, since $\lambda_2$ is linear in $t$, the only non-singular control sequences are
$$\{1, 0, -1\},\ \{0, -1\},\ \{-1\} \quad \text{and} \quad \{-1, 0, 1\},\ \{0, 1\},\ \{1\}. \qquad (1)$$
Unfortunately we cannot construct an optimal solution from a general initial point using these control sequences. Let us calculate the fuel consumed in going from an initial point $(\xi_1, \xi_2)$ to $(0, 0)$ using any admissible control. On any path $\dot{x}_2 = u$ with $|u| \le 1$, so
$$x_2(t) = \xi_2 + \int_0^t u(\tau)\,d\tau.$$
Now $x_2(t_1) = 0$, so
$$0 = \xi_2 + \int_0^{t_1} u(\tau)\,d\tau.$$
Hence
$$|\xi_2| = \left|\int_0^{t_1} u(\tau)\,d\tau\right| \le \int_0^{t_1} |u(\tau)|\,d\tau = J,$$
and so $J \ge |\xi_2|$.
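The bound $J \ge |\xi_2|$ can be checked numerically. The sketch below (illustrative numbers, not from the paper) starts from $(\xi_1, \xi_2) = (5, -2)$, drifts with $u = 0$ along $x_2 = -2$ until the parabola $x_2^2 = 2x_1$ is reached, then burns with $u = 1$ to the origin; the fuel used should approach $|\xi_2| = 2$.

```python
# Drift-then-burn strategy for the double integrator x1' = x2, x2' = u,
# starting from the assumed initial state (5, -2).
def drift_then_burn(xi1, xi2, dt=1e-5):
    x1, x2, fuel = xi1, xi2, 0.0
    # Phase 1: u = 0, drift left along x2 = xi2 until x2^2 >= 2*x1
    while x2 * x2 < 2.0 * x1:
        x1 += x2 * dt
    # Phase 2: u = +1, ride the parabola to the origin
    while x2 < 0.0:
        x1 += x2 * dt
        x2 += 1.0 * dt
        fuel += 1.0 * dt
    return x1, x2, fuel

x1, x2, fuel = drift_then_burn(5.0, -2.0)
```

Up to Euler discretization error the trajectory ends at the origin with fuel very close to the lower bound, showing the bound is attained from this starting point.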



If we find a $u(t)$ such that the corresponding value of $J$ is $|\xi_2|$, then this must be optimal. We can show that there are some initial states for which there are infinitely many fuel-optimal controls and other states for which there is no fuel-optimal control.

Fig. 1(b). Phase plane.

Let us divide the phase plane as shown in Fig. 1(b). Observe that $O^+$ is the half-parabola $x_2^2 = 2x_1$, $x_2 \le 0$, and similarly $O^-$ is the half-parabola $x_2^2 = -2x_1$, $x_2 \ge 0$. The region $R_1$ lies above $O^-$ and the half-line $x_2 = 0$, $x_1 > 0$; it includes that half-line and excludes $O^-$. The region $R_2$ lies between $O^-$ and the half-line $x_2 = 0$, $x_1 < 0$; it includes $O^-$ and excludes that half-line. The regions $R_3$ and $R_4$ are defined symmetrically in the lower half-plane, $R_3$ being the mirror image of $R_1$ and $R_4$ that of $R_2$.
Now it is easy to show that
(a) for $(\xi_1, \xi_2)$ in $R_1$ or $R_3$ there is no optimal control;
(b) for $(\xi_1, \xi_2)$ in $R_2$ or $R_4$ there are infinitely many optimal controls.

To prove (a) we first consider $(\xi_1, \xi_2)$ in $R_3$, and consider the non-singular controls that maximize $H$. They are listed in (1); to get to $O$ we need the control sequence $\{1, 0, -1\}$. The switch from $u^* = 1$ to $u^* = 0$ must take place at a point lying in $R_2$ with $x_2 = \eta > 0$, and since $u^* = 1$ at the start, the time taken to get from $x_2 = \xi_2$ to $x_2 = \eta$ is $t_1' = |\xi_2| + \eta$. The control is then switched to $u^* = 0$ and the vehicle drifts uncontrolled (consuming no fuel) along $x_2 = \eta$ until $O^-$ is reached. Then $u^*$ is switched to $-1$ for a time $t_2' = \eta$, say, and the system reaches $O$. The fuel consumed is therefore
$$J = \int |1|\,dt + \int 0\,dt + \int |-1|\,dt = (|\xi_2| + \eta) + 0 + \eta = |\xi_2| + 2\eta > |\xi_2|,$$
so no control sequence $\{1, 0, -1\}$ can give $J$ its known minimum value.

Singular controls arise when $\lambda_2 = 1$ or $-1$, not just at an isolated instant but on a time interval. This will happen if $A = 0$ and $B = 1$ or $-1$. Such controls, which are of the form $v(t)\,\mathrm{sgn}\,B$ with $0 \le v(t) \le 1$, cannot change sign. This means that no initial state in $R_3$ can be driven to the origin by a singular control that maximizes $H$: in $R_3$ the $x_2$-coordinate is negative, so to drive the system closer to the origin we need $u(0) > 0$, and since the singular control cannot change sign, $x_2$ increases for all $t$ and the system is driven to infinity. It is impossible to reach $O$ from $R_3$ using a singular control. Thus there is no optimal control for $(\xi_1, \xi_2)$ in $R_3$.
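The fuel excess $J = |\xi_2| + 2\eta$ of a $\{1, 0, -1\}$ sequence can be illustrated numerically. The sketch below (hypothetical initial state, taken to lie in $R_3$) burns with $u = 1$ up to $x_2 = \eta = 1$, coasts to $O^-$, then burns with $u = -1$ to the origin; the fuel used should be close to $|\xi_2| + 2\eta = 4$, strictly more than the unattainable bound $|\xi_2| = 2$.

```python
# {1, 0, -1} sequence for the double integrator from the assumed state (-5, -2).
def burn_coast_burn(xi1=-5.0, xi2=-2.0, eta=1.0, dt=1e-4):
    x1, x2, fuel = xi1, xi2, 0.0
    while x2 < eta:                    # u = +1 until x2 = eta
        x1 += x2 * dt; x2 += dt; fuel += dt
    while x1 < -0.5 * x2 * x2:         # u = 0: drift right until O- is reached
        x1 += x2 * dt
    while x2 > 0.0:                    # u = -1: ride O- down to the origin
        x1 += x2 * dt; x2 -= dt; fuel += dt
    return x1, x2, fuel

x1, x2, fuel = burn_coast_burn()
```

The trajectory reaches the origin (up to Euler error), but the fuel consumed exceeds the lower bound by $2\eta$, as the argument above predicts.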
To prove b we consider an initial state in R 4 . Suppose the control is non-singular. If we take u*  0 until
the system has drifted along x 2   2 to a point on O  and then switch to u  1 , we can control the system to
the origin and the value of J is │  2 │. This is an optimal control. Now suppose the control is singular. We must
have u  v  t  , 0  v  t   1 with corresponding state equations x1  x 2 , x  2  v t 
which is integrated to give
t
x 2   2   v d ,
0

B a k u , A z e r b a i j a n | 353
INTERNATIONAL JOURNAL Of ACADEMIC RESEARCH Vol. 3. No. 2. March, 2011, Part II

 t
 
x1   1    2   v d d
0 0 
At time t1 the system is to be at x1  x 2  0 , so
t1

0   2   v d
0
t1 t1  t1

 1    2 d  
0 0
 v d d   2  v d
0 0
t1 

 1   2 t1    v dd


0 0
(2)

Thus there are an infinite number of functions v   , 0  v    1 that satisfy (2). They are all optimal
t1 t1
since J  │ u │ dt = v d =│  2 │

0

0
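Two explicit choices of $v(\tau)$ satisfying (2) can be exhibited for the illustrative initial state $(\xi_1, \xi_2) = (5, -2)$ with $t_1 = 5$ (numbers chosen for this sketch, not from the paper): the constant profile $v \equiv 0.4$ and a burn-coast-burn profile. Both steer the system to the origin with the same fuel $J = |\xi_2| = 2$:

```python
# Two admissible v(tau) profiles satisfying the end conditions (2) for the
# assumed initial state (5, -2) and assumed final time t1 = 5.
def run(v_of_t, t1, xi1=5.0, xi2=-2.0, dt=1e-4):
    x1, x2, fuel = xi1, xi2, 0.0
    n = int(round(t1 / dt))
    for i in range(n):
        u = v_of_t(i * dt)     # u = v(t) >= 0 (singular form, sgn(lam2) = +1)
        x1 += x2 * dt          # x1' = x2
        x2 += u * dt           # x2' = u
        fuel += abs(u) * dt    # J = integral of |u|
    return x1, x2, fuel

const = lambda t: 0.4                                     # v(t) = 0.4 on [0, 5]
bang = lambda t: 1.0 if (t < 1.0 or t >= 4.0) else 0.0    # burn, coast, burn
results = [run(v, 5.0) for v in (const, bang)]
```

Both runs end (up to discretization error) at the origin with fuel $2$, so neither profile is preferable to the other: the fuel-optimal control is genuinely non-unique from this state.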

Fig. 1(c). Phase plane.

Again, to prove (a) let us consider $(\xi_1, \xi_2)$ in $R_1$, and consider first the non-singular controls that maximize $H$. They are listed in (1); to get to $O$ we need the control sequence $\{-1, 0, 1\}$. The switch from $u^* = -1$ to $u^* = 0$ must take place at a point lying in $R_4$ with $x_2 = -\eta < 0$, and since $u^* = -1$ at the start, the time taken to get from $x_2 = \xi_2$ to $x_2 = -\eta$ is $t_1' = |\xi_2| + \eta$. The control is then switched to $u^* = 0$ and the vehicle drifts uncontrolled (consuming no fuel) along $x_2 = -\eta$ until $O^+$ is reached. Then $u^*$ is switched to $1$ for a time $t_2' = \eta$, say, and the system reaches $O$. The fuel consumed is again
$$J = |\xi_2| + 2\eta > |\xi_2|,$$
so no control sequence $\{-1, 0, 1\}$ can give $J$ its known minimum value.

Singular controls arise when $\lambda_2 = 1$ or $-1$ on a time interval, which happens if $A = 0$ and $B = 1$ or $-1$. Such controls, of the form $v(t)\,\mathrm{sgn}\,B$ with $0 \le v(t) \le 1$, cannot change sign. In $R_1$ the $x_2$-coordinate is positive, so to drive the system closer to the origin we need $u(0) < 0$; since the singular control cannot change sign, $x_2$ decreases for all $t$ and the system is driven to infinity. It is impossible to reach $O$ from $R_1$ using a singular control, so there is no optimal control for $(\xi_1, \xi_2)$ in $R_1$.
Also, to prove (b) we consider an initial state in $R_2$. Suppose the control is non-singular. If we take $u^* = 0$ until the system has drifted along $x_2 = \xi_2$ to a point on $O^-$ and then switch to $u = -1$, we can control the system to the origin and the value of $J$ is $|\xi_2|$. This is an optimal control. Now suppose the control is singular. We must have $u = -v(t)$, $0 \le v(t) \le 1$, with corresponding state equations $\dot{x}_1 = x_2$, $\dot{x}_2 = -v(t)$, which again integrate to give
$$x_2 = \xi_2 - \int_0^t v(\tau)\,d\tau, \qquad x_1 = \xi_1 + \int_0^t\left(\xi_2 - \int_0^\sigma v(\tau)\,d\tau\right)d\sigma.$$
At time $t_1$ the system is to be at $x_1 = x_2 = 0$, so
$$0 = \xi_2 - \int_0^{t_1} v(\tau)\,d\tau, \qquad 0 = \xi_1 + \xi_2 t_1 - \int_0^{t_1}\!\!\int_0^\sigma v(\tau)\,d\tau\,d\sigma.$$
As in (2), there are an infinite number of functions $v(\tau)$, $0 \le v(\tau) \le 1$, satisfying these conditions, and they are all optimal since
$$J = \int_0^{t_1} |u|\,dt = \int_0^{t_1} v(\tau)\,d\tau = |\xi_2|.$$

3. APPLICATIONS OF CONTROL THEORY

In this section we discuss some applications of control theory, illustrating two problems in this regard. The Pontryagin maximum principle [12] is a useful necessary condition which we can now use to solve a range of control problems. We look first at the problem of controlling a linear system in a time-optimal manner. The truck problem is the simplest two-dimensional problem of this type; the general problem is dealt with in the next section. For the truck problem the control $u = u(t)$ was subject to the constraint $|u| \le k/m$; in what follows the constraint has been normalized to $|u| \le 1$, with no loss of generality. As explained earlier, we can also set $\lambda_0 = -1$ in any application of the Pontryagin theorem without loss of generality.


Problem 1. Suppose the system $\dot{x}_1 = f_1(x_1, x_2, u)$, $\dot{x}_2 = f_2(x_1, x_2, u)$ is to be controlled from $x^0$ at $t_0$ to some point on the curve $g(x_1, x_2) = 0$ at some time $t_1$ in such a way that
$$J = \int_{t_0}^{t_1} f_0(x_1, x_2, u)\,dt$$
is minimized. Find the optimal control.

Solution. Suppose that the problem has been solved, so that $u^*$ controls the system from $x^0$ to a point on the target curve $\Gamma$ and minimizes $J$. In the augmented state space the optimal path ends at the point $D$ on the curve $\Gamma'$ defined by $g(x_1, x_2) = 0$, $x_0 = x_0^*$, where $x_0^*$ is the minimum value of the cost $J$.

Fig. 2(a), 2(b). The optimal path.



Since $u^*$ minimizes $J$, the set of varied end-points $E$ must have a plane of support $\pi$ at $D$. Recall that $\pi$ is such that $E$ lies on one side of $\pi$ and the half-line $\ell$ on the other. The tangent to $\Gamma'$ at $D$ either lies entirely in $\pi$ or passes through $\pi$ at $D$. The second possibility leads to a contradiction, so for optimality the tangent to $\Gamma'$ at $D$ must lie in $\pi$. This geometric result can be expressed as a simple condition involving the co-state variables at $t_1$ and the tangent to the target curve in state space: the two-dimensional vector $(\lambda_1(t_1), \lambda_2(t_1))^T$ must be perpendicular to the tangent to the target $\Gamma$ at the optimal end-point. This is the transversality condition. We can write it as follows: if $v = (\alpha, \beta)^T$ is the tangent to $\Gamma$ at $(x_1^*(t_1), x_2^*(t_1))$, then
$$\lambda_1(t_1)\,\alpha + \lambda_2(t_1)\,\beta = 0$$
at the end-point.


3.1 Time-optimal Control of Linear Systems
We consider here systems with two variables $x_1(t), x_2(t)$ describing the state of the system and a single control variable $u(t)$ that is forced to take its values in such a way that $|u| \le 1$. We allow $u$ to be piecewise continuous and let the system be governed by a pair of linear differential equations
$$\dot{x}_1 = a x_1 + b x_2 + l u, \qquad \dot{x}_2 = c x_1 + d x_2 + m u,$$
with $|u| \le 1$ and $a, b, c, d, l, m$ given constants. In matrix notation the system can be written as $\dot{x} = Ax + \mathbf{l}u$, where
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad \mathbf{l} = \begin{pmatrix} l \\ m \end{pmatrix}.$$
Problem 2. Given that the system $\dot{x} = Ax + \mathbf{l}u$ can be controlled from a given initial point $x(t_0) = x^0$ to a given target point $x(t_1) = x^1$ by an admissible control (piecewise continuous and taking its values in the set $U$ such that $|u| \le 1$), find the optimal control $u^*(t)$ for which
$$J = \int_{t_0}^{t_1} 1\,dt = t_1 - t_0$$
is minimized.

Solution. We first write down the Hamiltonian $H(\lambda, x, u)$, find the co-state equations, and then maximize $H$ as a function of $u$. The Hamiltonian is
$$H(\lambda, x, u) = -1 + \lambda_1(a x_1 + b x_2 + l u) + \lambda_2(c x_1 + d x_2 + m u)$$
$$= -1 + \lambda_1(a x_1 + b x_2) + \lambda_2(c x_1 + d x_2) + (l\lambda_1 + m\lambda_2)\,u.$$
The co-state equations are derived as
$$\dot{\lambda}_1 = -\frac{\partial H}{\partial x_1} = -a\lambda_1 - c\lambda_2, \qquad \dot{\lambda}_2 = -\frac{\partial H}{\partial x_2} = -b\lambda_1 - d\lambda_2,$$
or, in matrix notation, $\dot{\lambda} = -A^T\lambda$, where $\lambda = (\lambda_1, \lambda_2)^T$.
We now choose, at each value of $t$, the value of $u = u(t)$ that maximizes the Hamiltonian. We note that $H$ is linear in $u$, so to maximize $H$ we need $u = 1$ or $u = -1$, depending on the sign of the coefficient $l\lambda_1 + m\lambda_2$. Thus the only controls that can lead to a minimum time of transfer are those of the form
$$u^* = \mathrm{sgn}(l\lambda_1 + m\lambda_2).$$
They are piecewise constant controls that are discontinuous at the zeros of
$$S(t) = l\lambda_1(t) + m\lambda_2(t). \qquad (3)$$
That is, they switch from $1$ to $-1$ or from $-1$ to $1$ whenever $S = 0$; for this reason $S$ as defined by (3) is called the switching function. In the interval between two zeros of $S$ the control is constant, so the state equations become autonomous,
$$\dot{x} = Ax + \mathbf{l}u^*, \qquad u^* = 1 \text{ or } -1,$$
and the form of the trajectories in the $(x_1, x_2)$-plane is easily found in each case. Provided that $ad - bc \neq 0$, the trajectories for $u^* = 1$ have an isolated singularity at the intersection of $a x_1 + b x_2 + l = 0$ and $c x_1 + d x_2 + m = 0$, while the trajectories for $u^* = -1$ have an isolated singularity at the intersection of $a x_1 + b x_2 - l = 0$ and $c x_1 + d x_2 - m = 0$.
The behavior of both families of trajectories is determined by the eigenvalues of the system matrix $A$. The trajectory pattern (the shape of the trajectories and the direction in which they are swept out as $t$ increases) is the same as that of the uncontrolled autonomous system $\dot{x} = Ax$; the only difference is that the whole phase-plane pattern is translated so that the singularity lies at the solution of $a x_1 + b x_2 + l = 0$, $c x_1 + d x_2 + m = 0$ for $u^* = 1$, and at the solution of $a x_1 + b x_2 - l = 0$, $c x_1 + d x_2 - m = 0$ for $u^* = -1$.
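The switching-function machinery can be sketched numerically: integrate $\dot{\lambda} = -A^T\lambda$ with Euler steps and record $S(t) = l\lambda_1 + m\lambda_2$ from (3). The matrix entries and initial co-state below are illustrative (a double integrator with $a = 0$, $b = 1$, $c = d = 0$, $l = 0$, $m = 1$), not taken from the paper.

```python
# Euler integration of the co-state system lam' = -A^T lam, recording the
# switching function S(t) = l*lam1 + m*lam2 at each step.
def switching_function(a, b, c, d, l, m, lam1, lam2, t_end=4.0, dt=1e-3):
    S = []
    t = 0.0
    while t < t_end:
        S.append(l * lam1 + m * lam2)
        # lam' = -A^T lam, written out component-wise
        d_lam1 = -(a * lam1 + c * lam2)
        d_lam2 = -(b * lam1 + d * lam2)
        lam1 += d_lam1 * dt
        lam2 += d_lam2 * dt
        t += dt
    return S

# For the assumed double integrator, lam1 is constant and lam2(t) = lam2(0) - lam1(0)*t,
# so with lam(0) = (1, 1) the switching function S(t) = 1 - t changes sign exactly once.
S = switching_function(0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0)
```

Since $\lambda_2$ is linear in $t$ here, the bang-bang control $u^* = \mathrm{sgn}\,S$ has at most one switch, consistent with the well-known structure of time-optimal double-integrator controls.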

We recall that a singular point in the phase plane represents a solution that is constant for all $t$; none of the trajectories of the system can pass through (or begin or end at) a singular point. In the following examples we use phrases such as 'the path RO'. Occasionally the point R is a singularity and, strictly speaking, we should say 'the path RO with the singularity at R excluded', but doing so would be cumbersome. Provided the reader bears this in mind there should be no confusion.

4. CONCLUSION

Nowadays the application of control theory to real-world problems has become more crucial than its purely theoretical aspects, so bridging theory and real-world applications is the main objective of present-day research in control theory. In this study the application of control theory to aerospace dynamics is investigated. We discussed applications of control theory and presented some problems based on the Pontryagin maximum principle, and we also discussed the time-optimal control of linear systems. We applied the theory to landing a space vehicle while optimally controlling its fuel. Finally, we conclude from the discussion in Section 2 that from the regions $R_1$ and $R_3$ it is impossible to land the space vehicle with fuel-optimal control, while from the regions $R_2$ and $R_4$ the vehicle can be landed with (infinitely many) fuel-optimal controls.

REFERENCES

1. Athans, M. and Falb, P. L. 1966. Optimal Control. McGraw-Hill, New York.
2. Biswas, M. H. A.; Ara, M.; Haque, M. N. and Rahman, M. A. 2011. Application of Control Theory in the Efficient and Sustainable Forest Management. International Journal of Scientific and Engineering Research, Vol. 2, No. 3, 2011.
3. Biswas, S. N. 1998. Classical Mechanics. First Edition, Books and Allied (P) Ltd, New Delhi, India.
4. Boltyanskii, V. G. 1971. Mathematical Methods of Optimal Control. Holt, Rinehart and Winston, New York.
5. Fattorini, H. O. 1999. Infinite Dimensional Optimization and Control Theory. Cambridge University Press, London.
6. Glowinski, R. 1984. Numerical Methods for Nonlinear Variational Problems (2nd Edition). Springer-Verlag, New York.
7. Gupta, S. L.; Kumar, V. and Sharma, H. V. 1987. Classical Mechanics. Revised Edition, Pragati Prakashan, New Delhi, India.
8. Kirschner, D.; Lenhart, S. and Serbin, S. 1997. Optimal Control of the Chemotherapy of HIV. J. Math. Biol., 35, 775-792.
9. Lenhart, S. and Workman, J. T. 2007. Optimal Control Applied to Biological Models. Chapman & Hall/CRC Press, USA.
10. Miele, A.; Weeks, M. W. and Ciarcia, M. 2007. Optimal Trajectories for Spacecraft Rendezvous. J. Optim. Theory Appl., 132, 353-376.
11. Pinch, E. R. 1993. Optimal Control and the Calculus of Variations. Oxford University Press, New York.
12. Pontryagin, L. S.; Boltyanskii, V. G.; Gamkrelidze, R. V. and Mishchenko, E. F. 1964. The Mathematical Theory of Optimal Processes. Pergamon Press, Oxford.
13. Stodola, P. and Mazal, J. 2010. Optimal Location and Motion of Autonomous Unmanned Ground Vehicles. WSEAS Transactions on Signal Processing, Issue 2, Volume 6, pp. 68-77, 2010.
14. Vinter, R. B. 2000. Optimal Control. Birkhäuser, Boston.
