Solution HW4

The document provides solutions to homework problems involving optimization of functions using various numerical methods. It gives the solutions for problems involving maximizing a function using the golden-section search, quadratic interpolation, and Newton's method. It also provides solutions applying the steepest ascent and Newton's methods to locate the maximum of a multivariate function. Finally, it uses the fminsearch function in MATLAB to find the maximum of the same multivariate function and compares the result to those from steepest ascent and Newton's methods.


Homework #4 Solution

13.2 Given
f(x) = −1.5x⁶ − 2x⁴ + 12x
(a) Plot the function.
(b) Use analytical methods to prove that the function is concave for all values of x.
(c) Differentiate the function and then use a root-location method to solve for the maximum f(x) and the corresponding value of x.

SOLUTION:

(a) [Plot of f(x) omitted; the curve rises to a single maximum between x = 0 and x = 2 and falls steeply beyond it.]

(b) f″(x) = −45x⁴ − 24x² ≤ 0 for all x (it vanishes only at x = 0), so the function is concave for all values of x.

(c) f′(x) = −9x⁵ − 8x³ + 12 = 0
Using the bisection method, we obtain:
Root = 0.916915
So f(0.916915) = 8.6979
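The bisection step can be cross-checked with a short Python sketch (Python is used here purely for illustration; the bracket [0, 2] is taken from the search interval used in Prob. 13.3):

```python
# Bisection on f'(x) = -9x^5 - 8x^3 + 12 to locate the maximum of
# f(x) = -1.5x^6 - 2x^4 + 12x. f'(0) > 0 and f'(2) < 0, so [0, 2] brackets the root.

def f(x):
    return -1.5 * x**6 - 2 * x**4 + 12 * x

def fp(x):
    return -9 * x**5 - 8 * x**3 + 12

def bisect(func, lo, hi, tol=1e-6):
    """Halve the bracket until it is narrower than tol."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # Keep the half-interval that still contains the sign change
        if func(lo) * func(mid) < 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

root = bisect(fp, 0.0, 2.0)
print(root, f(root))   # root ≈ 0.916915, f(root) ≈ 8.6979
```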

13.3 Solve for the value of x that maximizes f(x) in Prob. 13.2 using the golden-
section search. Employ initial guesses of xl = 0 and xu = 2 and perform three iterations.

SOLUTION:

Follow Example 13.1


For xl = 0, xu = 2: d = 0.618 × 2 = 1.236
x1 = 0 + 1.236 = 1.236, x2 = 2 − 1.236 = 0.764

The following table can be generated:

Iteration   xl        x2        x1        xu
1           0         0.764     1.236     2
2           0         0.472     0.764     1.236
3           0.472     0.764     0.944     1.236
4           0.764     0.944     1.056     1.236
5           0.764     0.875     0.944     1.056
...
11          0.9117    0.9180    0.9218    0.9280

With εa = 0.677%, xopt = 0.9179 and f(xopt) = 8.6979.
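The iteration above can be sketched in Python (this simplified version recomputes both interior points each pass rather than reusing one function evaluation as in Example 13.1, but it shrinks the bracket by the same golden ratio and reaches the same optimum):

```python
import math

# Golden-section search for the maximum of f(x) = -1.5x^6 - 2x^4 + 12x on [0, 2].

def f(x):
    return -1.5 * x**6 - 2 * x**4 + 12 * x

def golden_max(f, xl, xu, iters=11):
    R = (math.sqrt(5) - 1) / 2          # golden ratio conjugate ≈ 0.61803
    for _ in range(iters):
        d = R * (xu - xl)
        x1, x2 = xl + d, xu - d         # two interior points
        if f(x1) > f(x2):
            xl = x2                     # maximum lies in [x2, xu]
        else:
            xu = x1                     # maximum lies in [xl, x1]
    return 0.5 * (xl + xu)              # midpoint of the final bracket

xopt = golden_max(f, 0.0, 2.0)
print(xopt, f(xopt))   # xopt ≈ 0.917, f(xopt) ≈ 8.6979
```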

13.4 Repeat Prob. 13.3, except use quadratic interpolation. Employ initial guesses of
x0 = 0, x1 = 1 and x2 = 2 and perform three iterations.

SOLUTION:

Follow Example 13.2


With x0 = 0, x1 = 1 and x2 = 2, the first step gives x3 = 0.5702.
The following table is generated:

Iteration   x0        x1        x2        x3
1           0         1         2         0.5702
2           0         0.5702    1         1.101
3           0.5702    1.101     1         0.8738
4           0.5702    0.8738    1.101     0.8802
5           0.8738    0.8802    1.101     0.9096
6           0.8802    0.9096    1.101     0.9126
7           0.9096    0.9126    1.101     0.9158
8           0.9126    0.9158    1.101     0.9164

xopt = 0.9164, f(xopt) = 8.6979.
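The single parabolic-interpolation step can be verified with a Python sketch of the vertex formula from Example 13.2:

```python
# Quadratic (parabolic) interpolation step for the maximum of
# f(x) = -1.5x^6 - 2x^4 + 12x, from x0 = 0, x1 = 1, x2 = 2.

def f(x):
    return -1.5 * x**6 - 2 * x**4 + 12 * x

def parabolic_step(x0, x1, x2):
    """Vertex of the parabola through (x0, f0), (x1, f1), (x2, f2)."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    num = (f0 * (x1**2 - x2**2) + f1 * (x2**2 - x0**2)
           + f2 * (x0**2 - x1**2))
    den = (2 * f0 * (x1 - x2) + 2 * f1 * (x2 - x0)
           + 2 * f2 * (x0 - x1))
    return num / den

x3 = parabolic_step(0.0, 1.0, 2.0)
print(x3)   # ≈ 0.5702
```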

13.5 Repeat Prob. 13.3 but use Newton’s method. Employ an initial guess of x0 = 2
and perform three iterations.

SOLUTION:


x_{i+1} = x_i − f′(x_i)/f″(x_i)

f(x) = −1.5x⁶ − 2x⁴ + 12x
f′(x) = −9x⁵ − 8x³ + 12
f″(x) = −45x⁴ − 24x²

For x0 = 2:
x1 = 1.5833
x2 = 1.2646
x3 = 1.0477
...
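The three Newton iterations above can be reproduced with a few lines of Python:

```python
# Newton's method for the 1-D maximum: x_{i+1} = x_i - f'(x_i)/f''(x_i),
# started from x0 = 2.

def fp(x):
    return -9 * x**5 - 8 * x**3 + 12

def fpp(x):
    return -45 * x**4 - 24 * x**2

x = 2.0
for i in range(3):
    x = x - fp(x) / fpp(x)
    print(i + 1, x)   # x1 ≈ 1.5833, x2 ≈ 1.2646, x3 ≈ 1.0477
```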

13.6 Discuss the advantages and disadvantages of golden-section search, quadratic interpolation, and Newton's method for locating an optimum value in one dimension.

SOLUTION:

Golden-section search is inefficient, but it always converges provided xl and xu bracket the maximum or minimum of the function.
Quadratic interpolation and Newton's method may converge rapidly for well-behaved functions and good initial values; otherwise they may diverge.
Newton's method has the additional disadvantage that it requires evaluation of the second derivative f″.

14.6 Perform one iteration of the steepest ascent method to locate the maximum of
f(x, y) = 3.5x + 2y + x² − x⁴ − 2xy − y²
using initial guesses x = 0 and y = 0. Employ bisection to find the optimal step size in
the gradient search direction.

SOLUTION:

∂f/∂x = 3.5 + 2x − 4x³ − 2y,  ∂f/∂y = 2 − 2x − 2y

At x = 0 and y = 0:
∂f/∂x = 3.5,  ∂f/∂y = 2

g(h) = f(0 + 3.5h, 0 + 2h) = 3.5(3.5h) + 2(2h) + (3.5h)² − (3.5h)⁴ − 2(3.5h)(2h) − (2h)²
     = 16.25h − 5.75h² − 150.0625h⁴

dg/dh = 0 = 16.25 − 11.5h − 600.25h³

Using bisection gives h* = 0.279.

∴ x1 = 0 + 0.279(3.5) = 0.9765,  y1 = 0 + 0.279(2) = 0.558
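The whole iteration can be sketched in Python; the bisection bracket [0, 1] for h is an assumption (dg/dh is positive at h = 0 and negative at h = 1, so it contains the root):

```python
# One steepest-ascent iteration from (0, 0): step along the gradient
# direction (3.5, 2), choosing h by bisection on dg/dh = 16.25 - 11.5h - 600.25h^3.

def f(x, y):
    return 3.5 * x + 2 * y + x**2 - x**4 - 2 * x * y - y**2

def gp(h):
    # Derivative with respect to h of g(h) = f(3.5h, 2h)
    return 16.25 - 11.5 * h - 600.25 * h**3

def bisect(func, lo, hi, tol=1e-6):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if func(lo) * func(mid) < 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

h = bisect(gp, 0.0, 1.0)          # assumed bracket [0, 1]
x1, y1 = 3.5 * h, 2.0 * h
print(h, x1, y1, f(x1, y1))       # h ≈ 0.279, (x1, y1) ≈ (0.977, 0.558)
```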

Using the function from problem 14.6, and the starting point of (0, 0), perform one
iteration of Newton’s method and compute the function value.

SOLUTION:

f(x, y) = 3.5x + 2y + x² − x⁴ − 2xy − y²
∂f/∂x = 3.5 + 2x − 4x³ − 2y,  ∂f/∂y = 2 − 2x − 2y
∂f/∂x(0, 0) = 3.5,  ∂f/∂y(0, 0) = 2

∂²f/∂x² = 2 − 12x² = 2 at x = 0,  ∂²f/∂y² = −2,  ∂²f/∂x∂y = −2

So the Hessian is

H = [  2  −2 ]
    [ −2  −2 ]

Setting the gradient of the quadratic model to zero, ∇f(x_i) + H_i(x − x_i) = 0:

[ 3.5 ]   [  2  −2 ] [ x − 0 ]
[  2  ] + [ −2  −2 ] [ y − 0 ] = 0

which gives

[ x ]   [ −0.375 ]
[ y ] = [  1.375 ]

Equivalently, x_{i+1} = x_i − H_i⁻¹ ∇f, with det(H) = −8 and H⁻¹ = [0.25 −0.25; −0.25 −0.25]:

[ x1 ]   [ 0 ]   [  0.25  −0.25 ] [ 3.5 ]   [ −0.375 ]
[ y1 ] = [ 0 ] − [ −0.25  −0.25 ] [  2  ] = [  1.375 ]

The function value at this point is

f(−0.375, 1.375) = 3.5(−0.375) + 2(1.375) + (−0.375)² − (−0.375)⁴ − 2(−0.375)(1.375) − (1.375)² = 0.6990
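The Newton step above reduces to a 2×2 linear solve, which can be checked in plain Python (Cramer's rule is used here so no linear-algebra library is needed):

```python
# One Newton step from (0, 0): solve H d = -grad(f) for the step d.

def f(x, y):
    return 3.5 * x + 2 * y + x**2 - x**4 - 2 * x * y - y**2

# Gradient and Hessian at (0, 0), from the derivatives above
g = (3.5, 2.0)
H = ((2.0, -2.0),
     (-2.0, -2.0))

# Cramer's rule for H @ d = b with b = -g
b = (-g[0], -g[1])
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]   # = -8
dx = (b[0] * H[1][1] - H[0][1] * b[1]) / det
dy = (H[0][0] * b[1] - b[0] * H[1][0]) / det

x1, y1 = 0.0 + dx, 0.0 + dy
print(x1, y1, f(x1, y1))   # (-0.375, 1.375), f ≈ 0.699
```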

Using the function fminsearch from MATLAB, find the maximum of the function from 14.6. Be sure to negate the function when you define the M-file, since fminsearch finds minima. To define the function, choose File/New and type the function in as in the examples in the book on pages 394 and 395, then save the file with the default name. Compare the function value at the optimum to the function values from the first iterations of steepest ascent and Newton's method.

SOLUTION:

M-file:
function f=fxy(x)
f=-(3.5*x(1)+2*x(2)+x(1)^2-x(1)^4-2*x(1)*x(2)-x(2)^2)

Results:
>> x=fminsearch ('fxy',[0,0])

f=

-3.6210

x=

1.1514 -0.1513

>> x=fminsearch ('fxy',[0,0],optimset('MaxIter',1))

f=

f=

-8.7506e-004
f=

-4.9994e-004

Exiting: Maximum number of iterations has been exceeded - increase MaxIter option.
Current function value: -0.000875

x=

1.0e-003 *

0.2500 0
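The comparison the problem asks for can be made by evaluating the original (un-negated) f at the fminsearch optimum and at the points reached after one iteration of each method; a Python check of those three values:

```python
# Evaluate f(x, y) = 3.5x + 2y + x^2 - x^4 - 2xy - y^2 at the three points.

def f(x, y):
    return 3.5 * x + 2 * y + x**2 - x**4 - 2 * x * y - y**2

print(f(1.1514, -0.1513))   # fminsearch optimum:        ≈ 3.621
print(f(0.9765, 0.558))     # one steepest-ascent step:  ≈ 3.177
print(f(-0.375, 1.375))     # one Newton step:           ≈ 0.699
```

After a single iteration, steepest ascent is already close to the fminsearch maximum, while the Newton step (which lands at a saddle-like point of the quadratic model) is much farther away.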
