Introduction To Optimal Control

The document provides an introduction to optimal control problems. It defines optimal control as finding a controller that drives a system towards a desired operating condition while achieving a given performance criterion defined by a cost function J. It then classifies optimal control problems into different types based on the performance function used: minimum time, terminal, minimum effort, optimal servo, and regulator problems. Examples of each type are given to illustrate the problem formulations.


Chapter 1

Introduction to Optimal Control


Introduction
• Optimal control is finding a controller that drives a system towards a desired operating condition while achieving a given performance criterion.
• The performance criterion is given as a cost function J.
• The most commonly used form of J is

  J = \varphi\left(x(t_0), t_0, x(t_f), t_f\right) + \int_{t_0}^{t_f} L\left(x(t), t\right) dt

  – \varphi(x(t_0), t_0, x(t_f), t_f) accounts for the conditions at the beginning and end of the control, and
  – L(x(t), t) is the running cost over the entire interval.
Introduction
• The type of functions used for \varphi and L in the performance function determines the type of optimal control problem.
• The problem is a constrained optimization problem.
  – Constraints may be given on the time, the state conditions, the amount of control effort or energy, etc.
• Solution techniques depend on the type of function used for the initial and final conditions.
Problem formulation

• Given a system

  \dot{x}(t) = f(x(t), u(t), t)

• with initial condition

  x(t_0) = x_0

• find a control signal u that can
  – drive the system to a final state x(t_f)
  – fulfill the state constraints
  – minimize the performance function J
• That is,

  min J
  subject to \dot{x} = f(x, u, t)
             u < u_{max}
Classification of optimal control problems

• Optimal control problems can be classified into various types based on the performance function used.
• These are:
  – Minimum time control problem
  – Terminal control problem
  – Minimum control effort problem
  – Optimal servomechanism (tracking) problem
  – Optimal regulator problem
Minimum time control problem
• Objective: minimize the time required to drive a system from its initial state to its final state.
• The performance function J is

  J = \int_{t_0}^{t_f} dt = t_f - t_0

• Example: optimal control of a robotic manipulator to finish a task in minimum time.
Example for minimum time control problem

• A rocket burn trajectory is desired that minimizes the travel time between a starting point and a final point, 10 units of distance away.
  – The thrust can be between an upper limit of 1.1 and a lower limit of -1.1.
  – The initial and final velocity must be zero, and the maximum velocity can never exceed 1.7.
  – It is also desirable to minimize the fuel used to perform the maneuver. There is a drag resistance that is proportional to the square of the velocity, and mass is lost as fuel is burned during thrust operations.
Example - minimum time problem
• Since this is a minimum time problem, the objective is to minimize J given by

  J = \int_{t_0}^{t_f} dt

• This is equivalent to

  minimize  t_f

• Once we have determined the objective function, we next have to determine the constraints. The first constraint is usually the system dynamic equation.
Example - minimum time problem
• The dynamic equations of the rocket are

  \frac{ds}{dt} = v
  \frac{dv}{dt} = (u - 0.2 v^2)/m
  \frac{dm}{dt} = -0.01 u^2

• When the system is implemented, it will have additional constraints: the maximum and minimum control signal limits and the maximum and minimum velocity which the system can have.
• The last constraints are the boundary conditions on the initial and final position and velocity.
Example - the minimum time control problem

• The final problem formulation will then be

  minimize  t_f
  subject to
    \dot{s} = v
    \dot{v} = (u - 0.2 v^2)/m
    \dot{m} = -0.01 u^2
    u \le 1.1
    u \ge -1.1
    v \le 1.7
    v(0) = 0,  v(t_f) = 0
    s(0) = 0,  s(t_f) = 10
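As an illustration of how a formulation like this can be handed to a numerical solver, below is a minimal Python sketch using the GEKKO package. The choice of solver, the 101-point time grid, the initial mass of 1, and the mass lower bound are assumptions added for the sketch, not part of the original problem statement. Time is normalized to [0, 1], so the free final time t_f multiplies the dynamics.

  import numpy as np
  from gekko import GEKKO

  m = GEKKO(remote=False)
  nt = 101
  m.time = np.linspace(0, 1, nt)        # normalized time grid

  # states and control, with bounds taken from the formulation above
  s = m.Var(value=0)                    # position
  v = m.Var(value=0, ub=1.7)            # velocity, v <= 1.7
  mass = m.Var(value=1, lb=0.1)         # mass (initial value and lower bound assumed)
  u = m.MV(value=0, lb=-1.1, ub=1.1)    # thrust, -1.1 <= u <= 1.1
  u.STATUS = 1                          # let the optimizer adjust u

  tf = m.FV(value=1, lb=0.1, ub=100)    # free final time
  tf.STATUS = 1

  # dynamics, scaled by tf because time is normalized to [0, 1]
  m.Equation(s.dt() == tf * v)
  m.Equation(mass * v.dt() == tf * (u - 0.2 * v**2))
  m.Equation(mass.dt() == -tf * 0.01 * u**2)

  # endpoint conditions s(tf) = 10, v(tf) = 0, enforced at the last grid point only
  final = m.Param(value=np.append(np.zeros(nt - 1), 1.0))
  m.Equation(final * (s - 10) == 0)
  m.Equation(final * v == 0)

  m.Minimize(tf)                        # minimum-time objective
  m.options.IMODE = 6                   # simultaneous dynamic optimization
  m.solve(disp=False)
  print('optimal final time:', tf.value[0])

The same pattern (normalize time, scale the dynamics by t_f, pin the endpoint states with a final-point parameter) carries over to the other formulations in this chapter.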
Terminal control problem
• Objective: minimize the final error, i.e. given a desired final value, find a control signal that drives the system towards a point where the final state has minimum deviation from the desired value.
• J has the form

  J = [x(t_f) - d]^T S [x(t_f) - d]

  – d is the desired value of the state at the final time
Example - pick and place robotic manipulator

• Consider a three-degree-of-freedom robotic manipulator: three links connected by three joints, each joint driven by a motor. [Figure: three-degree-of-freedom manipulator]
• The objective is to move the robotic manipulator end effector from its current position to point A with minimum terminal error by applying torque to the motors which drive the joints.
• Moreover, each joint angle should not be moved beyond its limits, and each motor has a maximum torque limit.
• Formulate the problem as a terminal control problem.
Example - pick and place robot
• First we have to identify the objective function. As a terminal control problem, the objective is to minimize the terminal error. The error is the difference between the desired terminal position/orientation and the actual position/orientation of the end effector.
• Assume the desired position and orientation of the end effector at A (3 position variables and 3 orientation variables) is given by the vector D.
Example - pick and place robot
• If the actual final position and orientation of the robot is given by

  x(t_f) = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 & x_5 & x_6 \end{bmatrix}^T

• then the error will also be a six-dimensional variable, given by e = x(t_f) - D.
• Hence the objective function will be

  J = [x(t_f) - D]^T S [x(t_f) - D] = s_{11}(x_1 - d_1)^2 + s_{22}(x_2 - d_2)^2 + \dots   (for a diagonal weighting matrix S)
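As a quick numerical check of this quadratic terminal cost, the short sketch below evaluates it for made-up values of the final pose x(t_f), the desired pose D, and a diagonal weighting matrix S (all numbers are hypothetical).

  import numpy as np

  # hypothetical final pose, desired pose, and diagonal weighting matrix
  x_tf = np.array([0.95, 2.10, 0.48, 0.02, -0.01, 1.57])   # actual position/orientation
  D    = np.array([1.00, 2.00, 0.50, 0.00,  0.00, 1.57])   # desired position/orientation at A
  S    = np.diag([10.0, 10.0, 10.0, 1.0, 1.0, 1.0])        # weight position errors more heavily

  e = x_tf - D            # six-dimensional terminal error
  J = e @ S @ e           # [x(tf) - D]^T S [x(tf) - D]
  print(J)                # equals the sum of s_ii * e_i**2 because S is diagonal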
Minimum control effort
• Objective: minimize the energy used to drive a system from its initial state to a final state.
• J has the form

  J = \int_{t_0}^{t_f} u(t)^T R \, u(t) \, dt

  – R is a weighting matrix with dimension m x m
• Example: minimum fuel problem
Example: reformulate the minimum time control problem of the rocket as a minimum effort problem
• In rocket control, one important objective is minimizing the control effort, i.e. the fuel consumed.
• When the problem is formulated as energy minimization instead of time minimization, the objective becomes

  J = \int_{t_0}^{t_f} \tfrac{1}{2} u(t)^2 \, dt
Minimum effort rocket launch problem

  minimize  \int_{t_0}^{t_f} \tfrac{1}{2} u^2 \, dt

  subject to
    \dot{s} = v
    \dot{v} = (u - 0.2 v^2)/m
    \dot{m} = -0.01 u^2
    u \le 1.1
    u \ge -1.1
    v \le 1.7
    v(0) = 0,  v(t_f) = 0
    s(0) = 0,  s(t_f) = 10
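Continuing the GEKKO sketch given for the minimum time problem (same assumed setup and variable names), only the objective changes: an extra cost state accumulates ½u² and its value at the final time is minimized. The helper variable J and the fixed horizon value below are additions for the sketch, not part of the slides; for a pure minimum-effort problem the final time is typically fixed rather than left free.

  # replaces m.Minimize(tf) in the minimum-time sketch
  tf.value = 10.0                           # fix the horizon at an assumed feasible value
  tf.STATUS = 0
  J = m.Var(value=0)                        # running cost state, J(0) = 0
  m.Equation(J.dt() == tf * 0.5 * u**2)     # dJ/dt = 0.5 u^2 in tf-scaled time
  m.Minimize(final * J)                     # minimize the accumulated effort J(t_f)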
Optimal servomechanism or tracking problem
• Objective: design a control signal which minimizes the error between the desired and actual path of a system.
• J has the form

  J = \int_{t_0}^{t_f} [x(t) - d(t)]^T Q [x(t) - d(t)] \, dt = \int_{t_0}^{t_f} e(t)^T Q \, e(t) \, dt

• Examples: satellite tracking, continuous path control of a robot, etc.
• For instance, move the robot from A to B along given reference trajectories x_1(t) = ..., x_2(t) = ...; the cost is then

  J = \int_{t_0}^{t_f} \left( e_1(t)^2 + e_2(t)^2 + \dots \right) dt
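As a small numerical illustration, the sketch below approximates such a tracking cost from sampled trajectories; the sinusoidal reference and the slightly lagged actual response are made up purely for the example.

  import numpy as np

  t = np.linspace(0.0, 5.0, 501)
  dt = t[1] - t[0]

  # hypothetical desired path d(t) and actual path x(t) for two states
  d = np.stack([np.sin(t), np.cos(t)], axis=1)
  x = np.stack([np.sin(t - 0.05), np.cos(t - 0.05)], axis=1)

  Q = np.diag([1.0, 1.0])                   # error weighting matrix

  e = x - d                                 # tracking error e(t)
  # J ~= sum_k e_k^T Q e_k * dt, a rectangle-rule approximation of the integral
  J = np.sum(np.einsum('ki,ij,kj->k', e, Q, e)) * dt
  print(J)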
Optimal servomechanism contd
• The above problem can be extended to more advanced forms:
  – Servomechanism with minimum control effort

    J = \int_{t_0}^{t_f} \left( [x(t) - d(t)]^T Q [x(t) - d(t)] + u(t)^T R \, u(t) \right) dt

  – Servomechanism with minimum effort and terminal control

    J = [x(t_f) - d(t_f)]^T S [x(t_f) - d(t_f)] + \int_{t_0}^{t_f} \left( [x(t) - d(t)]^T Q [x(t) - d(t)] + u(t)^T R \, u(t) \right) dt
Optimal regulator problem
• This is a sub-problem of the optimal servomechanism where the desired final point is zero.
• Objective: drive the system towards a point where the final state value is zero.
• The performance measure J is given by

  J = x(t_f)^T S \, x(t_f) + \int_{t_0}^{t_f} \left( x(t)^T Q \, x(t) + u(t)^T R \, u(t) \right) dt
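For the linear, infinite-horizon special case of this regulator (the classical LQR, i.e. S = 0, t_f → ∞, and linear dynamics \dot{x} = Ax + Bu), the optimal control is the state feedback u = -Kx obtained from the algebraic Riccati equation. A minimal sketch is shown below; the double-integrator model and the weights are assumptions chosen for illustration.

  import numpy as np
  from scipy.linalg import solve_continuous_are

  # assumed plant: double integrator, x1' = x2, x2' = u
  A = np.array([[0.0, 1.0],
                [0.0, 0.0]])
  B = np.array([[0.0],
                [1.0]])

  Q = np.diag([1.0, 1.0])    # state weighting
  R = np.array([[1.0]])      # control weighting

  # solve A^T P + P A - P B R^-1 B^T P + Q = 0 for P
  P = solve_continuous_are(A, B, Q, R)

  # optimal feedback gain: u = -K x, with K = R^-1 B^T P
  K = np.linalg.solve(R, B.T @ P)
  print('K =', K)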
Examples
• Consider a body of mass M moving along a frictionless surface, driven by a force u. [Figure: mass M acted on by force u, moving from initial state (x_0, v_0) to final state (x_f, v_f)]
• Find a control signal u, bounded as -a_max < u < a_max, which can transfer the system from its initial state to its final state in minimum time.
Examples
• Model the system:

  F = ma = m \frac{d}{dt} v(t) = m \frac{d^2}{dt^2} x(t)

  \dot{x}_1 = x_2
  \dot{x}_2 = F/m = u

• The performance measure J is then

  J = \int_{0}^{t_f} dt = t_f
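For this double integrator the structure of the time-optimal solution is a standard result: the optimal control is bang-bang, switching once for a rest-to-rest transfer over a distance d = x_f - x_0, namely

  u^*(t) = \begin{cases} +a_{max}, & 0 \le t < t_f/2 \\ -a_{max}, & t_f/2 \le t \le t_f \end{cases}
  \qquad
  t_f = 2\sqrt{d / a_{max}}

i.e. accelerate at full effort for the first half of the distance and decelerate at full effort for the second half.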
Examples
• If friction of the surface is considered in the motion of the above body, the state-space model becomes

  F - b \frac{dx}{dt} = ma = m \frac{d}{dt} v(t) = m \frac{d^2}{dt^2} x(t)

  \dot{x}_1 = x_2
  \dot{x}_2 = \frac{F}{m} - \frac{b}{m} x_2 = u - \frac{b}{m} x_2

• The cost function can be taken to be the same.
• Time-optimal problem formulation for the frictionless case:

  min J = t_f
  subject to
    \dot{x}_1 = x_2
    \dot{x}_2 = F/m = u
    u < a_{max}
    u > a_{min}

• Minimum energy problem formulation for the frictionless surface:

  min J = \int_{0}^{t_f} \tfrac{1}{2} u^2 \, dt
Conclusion
• When considering optimal control problems, the following four points need to be considered:
  – Existence of a solution
  – Uniqueness of the solution
  – Main features of the optimal solution
  – The form of the solution
• For linear systems a solution usually exists, provided the control signal constraint is not very strict (u < u_max).
• u(t) is the optimal control signal; on its own it is undesirable because it is open loop, so it is usually converted to a closed-loop (feedback) law.
• x(t) is the corresponding optimal state trajectory.
• Is u(t) robust? That is, when parameters change or external disturbances occur, will the system perform the same? This question leads to optimal robust controllers.
