
Department of

AUTOMATIC CONTROL

Nonlinear Control and Servo Systems (FRTN05)


Exam – January 3, 2018, 08:00 – 13:00

Points and grades


All answers must include a clear motivation. The total number of points is 25. The
maximum number of points is specified for each subproblem.

Preliminary grades:
3: 12 − 16.5 points
4: 17 − 21.5 points
5: 22 − 25 points

Accepted aid
All course material, except for exercises, old exams, lab instructions, and solutions of these, may be used, as well as standard mathematical tables and the authorized "Formelsamling i reglerteknik"/"Collection of Formulae". Pocket calculator.

Note!
In many cases the subproblems can be solved independently of each other.

Good Luck!

1. Consider the dynamical system

ÿ + sin(ẏ) = −5y² + 5

a. Write the system on state-space form. (1 p)


b. Find the equilibrium points. (1 p)
c. Determine the character of each equilibrium point as, e.g., stable node, unstable
focus, etc. (2 p)

Solution

a. Introduce the states x1 = y and x2 = ẏ. The state-space form is then

ẋ1 = x2   (1)
ẋ2 = −5x1² − sin(x2) + 5   (2)

b. The time derivatives are zero in the equilibria, i.e.,

0 = x2   (3)
0 = −5x1² − sin(x2) + 5   (4)

Insertion of the first equation into the second yields −5x1² + 5 = 0 ⇔ x1² = 1 ⇔ x1 = ±1. Hence, there are two equilibrium points: (x1, x2) = (−1, 0) and (x1, x2) = (1, 0).
c. Denote by f(x) the right-hand side of the state-space representation. First, we linearize the system:

df/dx = [ 0        1         ]
        [ −10x1⁰   −cos(x2⁰) ]   (5)

For (−1, 0): Insertion of the equilibrium point yields

df/dx = [ 0    1 ]
        [ 10  −1 ]   (6)

The eigenvalues are λ1 = 2.7016 and λ2 = −3.7016. Hence, this is a saddle point.

For (1, 0): Insertion of the equilibrium point yields

df/dx = [ 0     1 ]
        [ −10  −1 ]   (7)

The eigenvalues are λ1,2 = −1/2 ± (√39/2)i. This is therefore a stable focus.
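These eigenvalue computations can be double-checked numerically (a sketch using numpy; not part of the exam solution):

```python
import numpy as np

# Jacobians of Problem 1 evaluated at the two equilibria
J_minus = np.array([[0.0, 1.0],
                    [10.0, -1.0]])   # at (x1, x2) = (-1, 0)
J_plus = np.array([[0.0, 1.0],
                   [-10.0, -1.0]])   # at (x1, x2) = (1, 0)

eig_minus = np.linalg.eigvals(J_minus)
eig_plus = np.linalg.eigvals(J_plus)

# One positive and one negative real eigenvalue: saddle point
print(sorted(eig_minus.real))
# Complex pair with real part -1/2: stable focus
print(eig_plus)
```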

2. Consider the system given by

q̈ = −(q³ + q̇)

Is the system globally asymptotically stable?


Hint: It might be useful to introduce the states x1 = q, x2 = q̇, and consider a function of the form V(x) = cx1⁴ + dx2². (3 p)

Solution
Using the states as suggested by the hint, we write the system on state-space form:

ẋ1 = x2
ẋ2 = −x1³ − x2   (8)

We see that the origin is an equilibrium point. With c and d positive constants, we have

V(0) = 0
V(x) > 0 for all x ≠ 0
V(x) → ∞ as ||x|| → ∞

We further have V̇ = 4cx1³ẋ1 + 2dx2ẋ2 = 4cx1³x2 − 2dx2(x1³ + x2). Let for instance c = 1, d = 2. Then V̇ = −4x2² ≤ 0. Moreover, V̇ = 0 requires x2 = 0, and the only solution of the state-space system with x2 ≡ 0 also satisfies ẋ2 = −x1³ = 0, i.e., x1 = 0. It therefore follows from LaSalle's theorem that the origin is globally asymptotically stable.
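The Lyapunov argument can be sanity-checked by simulation (a sketch assuming scipy is available; the initial condition and horizon are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # x1' = x2, x2' = -x1^3 - x2
    return [x[1], -x[0]**3 - x[1]]

def V(x1, x2):
    # Lyapunov function with c = 1, d = 2
    return x1**4 + 2 * x2**2

sol = solve_ivp(f, (0.0, 30.0), [2.0, -1.0], rtol=1e-8, atol=1e-10)
v = V(sol.y[0], sol.y[1])

# V is (numerically) non-increasing and the state approaches the origin
print(v[0], v[-1], np.abs(sol.y[:, -1]).max())
```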

3. Consider the system

ẋ1 = x2 + x1(2 − x1² − x2²)
ẋ2 = −x1 + x2(2 − x1² − x2²)

A solution to this system is given by

(x1⁰, x2⁰) = (√2 sin(t), √2 cos(t))

a. Linearize the system around the limit cycle. In particular, write the linearized
system on the form

δ̇ = A(t)δ

where A(t) is a time-dependent system matrix, and δ = x − x0 . (1 p)

b. The trajectory is periodic and can thus be seen as a limit cycle. Determine
whether this limit cycle is stable or not.
Hint: For this subproblem, introduce polar coordinates r and θ, i.e., let

x1 = r cos(θ)
x2 = r sin(θ)

(2 p)

Solution

a. In compact notation we have

ẋ = f(x)

Introduce δ = x(t) − x⁰(t) as the deviation from the nominal trajectory. We have

ẋ = ẋ⁰ + δ̇

and the first-order Taylor expansion of f around x⁰(t) is given by

ẋ ≈ f(x⁰) + (∂f(x⁰)/∂x)δ

So

ẋ⁰ + δ̇ = f(x⁰) + (∂f(x⁰)/∂x)δ

Since x⁰(t) is a solution to the state equation we have ẋ⁰ = f(x⁰) and thus

δ̇ = (∂f(x⁰(t))/∂x)δ = A(t)δ

where A(t) = [ ∂f1/∂x1  ∂f1/∂x2 ; ∂f2/∂x1  ∂f2/∂x2 ] evaluated at x⁰(t), i.e.,

A(t) = [ −4sin²(t)            1 − 4sin(t)cos(t) ]
       [ −1 − 4sin(t)cos(t)   −4cos²(t)         ]

b. To determine stability of the limit cycle, we introduce polar coordinates. With r ≥ 0:

x1 = r cos(θ)
x2 = r sin(θ)

Differentiating both sides gives

[ ẋ1 ]   [ cos(θ)   −r sin(θ) ] [ ṙ ]
[ ẋ2 ] = [ sin(θ)    r cos(θ) ] [ θ̇ ]

Inverting the matrix gives

[ ṙ ]         [  r cos(θ)   r sin(θ) ] [ ẋ1 ]
[ θ̇ ] = (1/r) [ −sin(θ)     cos(θ)   ] [ ẋ2 ]

Plugging in the state equations results in

ṙ = r(2 − r²)   (9)
θ̇ = −1          (10)

We see that the only equilibrium points of the radial dynamics are r = 0 and r = √2 (since r ≥ 0). Linearizing around r = √2 (i.e., the limit cycle) gives

r̃˙ = −4r̃

Since the eigenvalue is strictly negative, r = √2 is a locally asymptotically stable equilibrium point of the radial dynamics. Hence the limit cycle is stable.
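The conclusion can be illustrated by simulating the system from initial conditions inside and outside the circle r = √2 (a sketch assuming scipy; the initial points and horizon are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    s = 2 - x[0]**2 - x[1]**2
    return [x[1] + x[0] * s, -x[0] + x[1] * s]

r_ends = []
for x0 in ([0.1, 0.0], [3.0, 0.0]):   # inside and outside the limit cycle
    sol = solve_ivp(f, (0.0, 20.0), x0, rtol=1e-8, atol=1e-10)
    r_ends.append(np.hypot(sol.y[0, -1], sol.y[1, -1]))

# Both radii should approach sqrt(2), the stable limit cycle
print(r_ends)
```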

4. Consider an asymptotically stable linear system G(s), of which the Nyquist plot
is shown in Figure 1. The linear system is connected with negative feedback to
a static nonlinearity, Ψ, as shown in Figure 2.

[Figure: Nyquist plot of G(s); real and imaginary axes ranging from −1 to 1]

Figure 1 Nyquist plot of G(s) in Problem 4.

[Figure: negative feedback connection of the linear system and the static nonlinearity −Ψ(·)]

Figure 2 Feedback in Problem 4.

a. Use the Circle criterion to determine a sector for Ψ, as large as you can, such
that the feedback system is BIBO stable. (2 p)
b. Is the system BIBO stable for Ψ(·) = sin(·)? (1 p)

Solution

a. Figure 3 shows that the Nyquist plot lies inside the disk D(−0.5, 1), i.e., the disk with real-axis intercepts −0.5 and 1. With −1/k2 = −0.5 and −1/k1 = 1 this corresponds to k2 = 2 and k1 = −1, so the maximal stability sector is (−1, 2).
b. The function Ψ(x) = sin(x) is bounded by the lines l1 = −x and l2 = x; in fact sin(x)/x ∈ (−0.22, 1], which lies strictly inside the stability sector (−1, 2). Hence the feedback system is BIBO stable.
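The sector claim for sin can be checked numerically (a sketch; the sampling range is an arbitrary choice):

```python
import numpy as np

# Sector slopes of psi(x) = sin(x) are sin(x)/x; sample them on a grid
x = np.linspace(-50.0, 50.0, 200001)
x = x[x != 0.0]                 # exclude x = 0 (the limit value there is 1)
slopes = np.sin(x) / x

# Minimum is about -0.217, maximum approaches 1: inside the sector (-1, 2)
print(slopes.min(), slopes.max())
```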

5. A team of engineers is working together to build a self-driving electric car.


They have a sketch ready, but still a lot of work remains. One subproblem
consists of designing a sliding mode controller for the system

Figure 3 Nyquist plot of G(s)

ẋ1 = x2 + u
ẋ2 = x1

with the switch function σ(x) = x1 + 2x2 . They turn to you for advice, since
they lack experience in control design.
a. Design a sliding mode controller with the switch function σ(x). (2 p)
b. Determine the sliding set and the sliding dynamics. (1 p)

Solution

a. The control law is

u = −(pᵀAx)/(pᵀB) − (μ/(pᵀB)) sign(σ(x))   (11)

where p defines the sliding surface

σ(x) = pᵀx = 0   (12)

Hence, we have pᵀ = [1 2], and with A = [0 1; 1 0] and B = [1; 0] we get pᵀB = 1, which yields

u = −(2x1 + x2) − μ sign(x1 + 2x2)   (13)

b. The sliding set is where σ(x) = x1 + 2x2 = 0, i.e., the line x1 + 2x2 = 0.
To find the sliding dynamics, we first determine the equivalent control ueq . On
the sliding set we have

σ̇ = ẋ1 + 2ẋ2 = x2 + ueq + 2x1 = 0

and therefore

ueq = −2x1 − x2

The sliding dynamics is

ẋ1 = x2 + ueq = −2x1


ẋ2 = x1
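A simple Euler simulation illustrates the design (a sketch; the gain μ = 1, the initial state, the step size, and the horizon are arbitrary choices not given in the problem):

```python
import numpy as np

mu, dt = 1.0, 1e-4
x = np.array([2.0, -0.5])
for _ in range(int(10.0 / dt)):
    sigma = x[0] + 2 * x[1]
    # Control law (13): u = -(2 x1 + x2) - mu * sign(sigma)
    u = -(2 * x[0] + x[1]) - mu * np.sign(sigma)
    # Plant: x1' = x2 + u, x2' = x1
    x = x + dt * np.array([x[1] + u, x[0]])

# The state reaches the sliding set sigma = 0 and then decays to the origin
print(x, x[0] + 2 * x[1])
```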

6. Consider the system

d³z/dt³ + d²z/dt² + dz/dt = −(1/3)z³

a. This can be seen as a negative feedback connection between a linear system P(s) and a static nonlinearity f(x) = (1/3)x³. Determine P(s). (2 p)

b. Calculate the describing function of the nonlinearity f(x) = (1/3)x³. (2 p)

(Hint: ∫₀^{2π} sin⁴(x) dx = 3π/4)

c. Analyze the existence, amplitude and frequency of a possible limit cycle.


(2 p)

Solution

a. The Laplace transform between −f and z results in

P(s) = 1/(s(s² + s + 1)).

b. The function is odd, which implies that the describing function is real:

b1 = (A³/(3π)) ∫₀^{2π} sin⁴(φ) dφ = A³/4,

which gives the describing function

N(A) = b1/A = A²/4.

c. We want to find the points where Im P(iω) = 0. Some calculations give

Im P(iω) = −(1 − ω²) / (ω((1 − ω²)² + ω²)),

which in turn gives ω = 1. Finally, the harmonic balance condition yields

P(i) = −1 = −1/N(A) = −4/A² ⇒ A = 2.

To conclude: the frequency of the limit cycle is ω = 1 rad/s and its amplitude is A = 2.
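Both the describing function and the harmonic balance point can be verified numerically (a sketch; the grid resolution is an arbitrary choice):

```python
import numpy as np

# Describing function of f(x) = x^3 / 3 at amplitude A, via the first
# Fourier coefficient b1 = (1/pi) * integral of f(A sin(phi)) sin(phi)
A = 2.0
phi = np.linspace(0.0, 2 * np.pi, 200001)
dphi = phi[1] - phi[0]
b1 = np.sum((A * np.sin(phi))**3 / 3 * np.sin(phi)) * dphi / np.pi
N = b1 / A

# Harmonic balance: P(i) should equal -1 = -1/N(A) at A = 2
P_i = 1 / (1j * ((1j)**2 + 1j + 1))
print(N, P_i)
```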

7. Isabelle and Antonio work as control engineers in a toy factory. This year, they
aim to improve the number of toys produced per year, and have encountered
the following optimal control problem as a main challenge.

Minimize ∫₀¹ 4u(t)² dt + 6x(1)²

subject to

ẋ = u
x(0) = 1

Unfortunately, they are not very experienced in optimal control, and the rest
of the staff are on Winter vacation. They kindly ask you for help, since they
know that you have studied nonlinear control. Derive the optimal control law
for the problem above.
(3 p)

Solution
We have a fixed final time, tf = 1. Further,

Φ(x(tf)) = 6x(tf)²
L = 4u²
f(x, u) = u

Hamiltonian:

H = L + λf = 4u² + λu

Adjoint equation:

λ̇ = −Hx = 0
λ(tf ) = Φx (x(tf )) = 12x(tf )

Therefore, it can be noted that λ is constant, λ(t) = 12x(tf ).


Optimality conditions:
Minimizing H with respect to u gives

Hu = 8u + λ = 0 ⇔ u = −λ/8

Hence, u = −(3/2)x(tf).
The last step is to insert this into the system equation, which yields

ẋ = −(3/2)x(tf) ⇔ x(tf) − x(0) = ∫₀^{tf} −(3/2)x(tf) dt ⇔ x(1) − 1 = −(3/2)x(1) ⇔ x(1) = 2/5.

Hence, the optimal control law is u = −(3/2)x(tf) = −3/5.
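Since the optimal control here is constant, the result can be cross-checked by minimizing the cost over constant controls u, for which x(1) = x(0) + u = 1 + u (a sketch; the search grid is an arbitrary choice):

```python
import numpy as np

# J(u) = integral_0^1 4 u^2 dt + 6 x(1)^2 with x(1) = 1 + u for constant u
u = np.linspace(-2.0, 2.0, 400001)
J = 4 * u**2 + 6 * (1 + u)**2
u_star = u[np.argmin(J)]

# Minimizer should match the derived optimal control u = -3/5
print(u_star)
```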
