
Chap2 Lec5 Unconstrained Geometric Programming

This document provides notes on unconstrained geometric programming. It defines a posynomial as a particular type of function to be minimized over a convex set. The A-G inequality is used to relate the primal geometric program of minimizing this function to the dual geometric program of maximizing a related function. Solving the dual program provides both the minimum value and the solution to the primal program through a system of linear equations. An example problem is worked through to demonstrate the full procedure.


Jim Lambers

MAT 419/519
Summer Session 2011-12
Lecture 8 Notes

These notes correspond to Section 2.5 in the text.

Unconstrained Geometric Programming


Previously, we learned how to use the A-G Inequality to solve an unconstrained minimization
problem. We now formalize the procedure for solving such problems, in the case where the objective
function to be minimized has the following particular form.
Definition Let D ⊆ R^m be the (convex) subset of R^m defined by

D = {(t_1, t_2, ..., t_m) ∈ R^m | t_j > 0, j = 1, 2, ..., m}.

A function g : D → R of the form

$$g(t) = \sum_{i=1}^n c_i \prod_{j=1}^m t_j^{\alpha_{ij}},$$

where c_i > 0 for i = 1, 2, ..., n and α_ij ∈ R for i = 1, 2, ..., n, j = 1, 2, ..., m, is called a posynomial.
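As a concrete illustration, a posynomial can be evaluated directly from its coefficients c_i and its exponent matrix α. The following is a minimal sketch (the function name and array layout are my own, not from the text):

```python
import numpy as np

def posynomial(c, alpha, t):
    """Evaluate g(t) = sum_i c_i * prod_j t_j**alpha[i][j].

    c     : length-n array of positive coefficients c_i
    alpha : n-by-m array of real exponents alpha_ij
    t     : length-m array of positive variables t_j
    """
    c = np.asarray(c, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    t = np.asarray(t, dtype=float)
    # Row i of t**alpha holds the factors t_j**alpha_ij of the ith monomial.
    return float(c @ np.prod(t ** alpha, axis=1))
```

For instance, with c = (1, 2) and exponent rows (1, 1) and (-1, 0), this evaluates g(t) = t_1 t_2 + 2/t_1.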


We now investigate how the A-G Inequality can be used to find the minimum of a given posynomial g(t) on D, if one exists. That is, we will be solving the primal geometric program (GP)

Minimize $g(t) = \sum_{i=1}^n c_i \prod_{j=1}^m t_j^{\alpha_{ij}}$
subject to $t_1, t_2, \ldots, t_m > 0$.

We denote the ith term of g(t) by

$$g_i(t) = c_i \prod_{j=1}^m t_j^{\alpha_{ij}}, \qquad x_i = \frac{g_i(t)}{\delta_i}, \quad i = 1, 2, \ldots, n,$$

where δ_1, δ_2, ..., δ_n > 0 and

$$\sum_{i=1}^n \delta_i = 1.$$
Then, the A-G Inequality yields

$$\begin{aligned}
g(t) &= \sum_{i=1}^n \delta_i x_i \\
&\ge \prod_{i=1}^n x_i^{\delta_i} \\
&= \prod_{i=1}^n \left(\frac{c_i}{\delta_i}\right)^{\delta_i} \left(\prod_{j=1}^m t_j^{\alpha_{ij}}\right)^{\delta_i} \\
&= \prod_{i=1}^n \left(\frac{c_i}{\delta_i}\right)^{\delta_i} \prod_{j=1}^m t_j^{\alpha_{ij}\delta_i} \\
&= \left[\prod_{i=1}^n \left(\frac{c_i}{\delta_i}\right)^{\delta_i}\right] \prod_{j=1}^m t_j^{\sum_{i=1}^n \alpha_{ij}\delta_i}.
\end{aligned}$$

Note that only the first step uses the A-G Inequality; the remaining steps are algebraic rearrangements. Because this is an unconstrained minimization problem, we need the quantity on the low side of the A-G Inequality to be a constant, that is, independent of t. It follows that the exponents δ_i must satisfy

$$\sum_{i=1}^n \alpha_{ij}\delta_i = 0, \quad j = 1, 2, \ldots, m.$$

If a vector δ = (δ_1, δ_2, ..., δ_n) can be found that satisfies this condition, as well as the previous conditions we have imposed on the δ_i's, then we have a candidate for a solution to the dual geometric program (DGP)

Maximize $v(\delta) = \prod_{i=1}^n \left(\frac{c_i}{\delta_i}\right)^{\delta_i}$
subject to $\delta_1, \delta_2, \ldots, \delta_n > 0$ (Positivity Condition)
$\sum_{i=1}^n \delta_i = 1$ (Normality Condition)
$\sum_{i=1}^n \alpha_{ij}\delta_i = 0, \; j = 1, 2, \ldots, m$ (Orthogonality Condition).

A vector δ that satisfies all three of the above conditions is said to be a feasible vector of the DGP.
By the A-G Inequality, we have, for each feasible vector δ,

g(t) ≥ v(δ), t ∈ D.

This inequality is known as the Primal-Dual Inequality.
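To make the dual concrete, here is a small sketch (function names are my own) that evaluates v(δ) and checks the three feasibility conditions numerically:

```python
import numpy as np

def dual_objective(c, delta):
    """v(delta) = prod_i (c_i / delta_i)**delta_i."""
    c, delta = np.asarray(c, float), np.asarray(delta, float)
    return float(np.prod((c / delta) ** delta))

def is_feasible(alpha, delta, tol=1e-10):
    """Check positivity, normality, and orthogonality of delta."""
    alpha, delta = np.asarray(alpha, float), np.asarray(delta, float)
    return (bool(np.all(delta > 0))                          # Positivity
            and abs(delta.sum() - 1.0) < tol                 # Normality
            and bool(np.allclose(alpha.T @ delta, 0.0,
                                 atol=tol)))                 # Orthogonality
```

By the Primal-Dual Inequality, any δ passing `is_feasible` makes `dual_objective(c, delta)` a lower bound for g(t) on all of D.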


If, in addition, δ∗ is a global maximizer of v(δ), and is therefore a solution of the DGP, then v(δ∗) is a lower bound for the minimum value of g(t) on D. It can be shown using the criterion ∇g(t) = 0 that if t∗ is a global minimizer of g(t) on D, and is therefore a solution of the GP, then

g(t∗) = v(δ∗)

for some feasible vector δ∗. That is, the Primal-Dual Inequality actually becomes an equality.

Therefore, if δ∗ is a solution of the DGP, then, by the A-G Inequality, the solution to the GP t∗ = (t∗_1, t∗_2, ..., t∗_m) can be found from the relations

$$x_1 = x_2 = \cdots = x_n = v(\delta^*),$$

or, equivalently,

$$g_i(t^*) = v(\delta^*)\,\delta_i^*, \quad i = 1, 2, \ldots, n.$$

Note that the δ_i's indicate the relative contributions of each term g_i(t∗) of g(t∗) to the minimum. By taking the natural logarithm of both sides of these equations, we obtain a system of linear equations for the unknowns z_j = ln t∗_j, j = 1, 2, ..., m. Specifically, we can solve

$$\sum_{j=1}^m \alpha_{ij} z_j = \ln\left(\frac{v(\delta^*)\,\delta_i^*}{c_i}\right), \quad i = 1, 2, \ldots, n.$$

Exponentiating the z_j's yields the components t∗_j, j = 1, 2, ..., m, of the minimizer t∗.
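This log-linear system can be solved numerically. A sketch (the function name is illustrative): since the system has n equations in m unknowns it is generally overdetermined, but it is consistent at the optimum, so a least-squares solve recovers z:

```python
import numpy as np

def recover_minimizer(c, alpha, delta_star, v_star):
    """Solve sum_j alpha_ij z_j = ln(v* delta_i* / c_i), then t* = exp(z)."""
    c = np.asarray(c, float)
    alpha = np.asarray(alpha, float)
    delta_star = np.asarray(delta_star, float)
    rhs = np.log(v_star * delta_star / c)
    # n equations, m unknowns; consistent at the optimum, so least
    # squares recovers z exactly (up to rounding error).
    z, *_ = np.linalg.lstsq(alpha, rhs, rcond=None)
    return np.exp(z)
```

Feeding the recovered t∗ back into g should reproduce v(δ∗), which serves as a built-in sanity check.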


This leads to the following method for solving the GP, known as Unconstrained Geometric
Programming:
1. Find all feasible vectors δ for the corresponding DGP.
2. If no feasible vectors can be found, then the DGP, and therefore the GP, have no solution.
3. Compute the value of v(δ) for each feasible vector δ. Each vector δ ∗ that maximizes the value
of v(δ) is a solution to the DGP.
4. To obtain the solution t∗ to the GP, solve the system of equations

gi (t∗ ) = v(δ ∗ )δi∗ , i = 1, 2, . . . , n

for t∗1 , t∗2 , . . . , t∗m , which can be reduced to a system of linear equations as described above.

Example We will solve the GP

Minimize $g(t) = \dfrac{1}{t_1 t_2 t_3} + 2 t_2 t_3 + 3 t_1 t_3 + 4 t_1 t_2$
subject to $t_1, t_2, t_3 > 0$.

This leads to the DGP

Maximize $v(\delta) = \left(\frac{1}{\delta_1}\right)^{\delta_1} \left(\frac{2}{\delta_2}\right)^{\delta_2} \left(\frac{3}{\delta_3}\right)^{\delta_3} \left(\frac{4}{\delta_4}\right)^{\delta_4}$
subject to $\delta_1, \delta_2, \delta_3, \delta_4 > 0$ (Positivity Condition)
$\delta_1 + \delta_2 + \delta_3 + \delta_4 = 1$ (Normality Condition)
$-\delta_1 + \delta_3 + \delta_4 = 0,$
$-\delta_1 + \delta_2 + \delta_4 = 0,$ (Orthogonality Condition)
$-\delta_1 + \delta_2 + \delta_3 = 0.$
The Normality Condition and Orthogonality Condition, together, form a system of 4 equations in 4 unknowns whose coefficient matrix is nonsingular, so the system has the unique solution

$$\delta^* = \left(\frac{2}{5}, \frac{1}{5}, \frac{1}{5}, \frac{1}{5}\right),$$

which also satisfies the Positivity Condition, so it is feasible. As it is the only feasible vector for the DGP, it is also the solution to the DGP.

It follows that the maximum value of v(δ), which is also the minimum value of g(t), is

$$v(\delta^*) = \left(\frac{5}{2}\right)^{2/5} 10^{1/5}\, 15^{1/5}\, 20^{1/5} \approx 7.155.$$
To find the minimizer t∗, we can solve the equations

$$\begin{aligned}
\frac{1}{t_1^* t_2^* t_3^*} &= \delta_1^* v(\delta^*) = \frac{2}{5} v(\delta^*) \\
2 t_2^* t_3^* &= \delta_2^* v(\delta^*) = \frac{1}{5} v(\delta^*) \\
3 t_1^* t_3^* &= \delta_3^* v(\delta^*) = \frac{1}{5} v(\delta^*) \\
4 t_1^* t_2^* &= \delta_4^* v(\delta^*) = \frac{1}{5} v(\delta^*).
\end{aligned}$$

From these equations, we obtain the relations

$$3 t_3^* = 4 t_2^*, \qquad 2 t_3^* = 4 t_1^*, \qquad 2 t_2^* = 3 t_1^*, \qquad \frac{1}{3 (t_1^*)^3} = \frac{2}{5} v(\delta^*),$$

which yield the solution

$$t_1^* = \sqrt[3]{\frac{5}{6 v(\delta^*)}}, \qquad t_2^* = \frac{3}{2}\sqrt[3]{\frac{5}{6 v(\delta^*)}}, \qquad t_3^* = 2\sqrt[3]{\frac{5}{6 v(\delta^*)}}.$$

Substituting these values into g(t) yields the value of v(δ∗), as expected. □
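As a numerical check on this example (a sketch; the variable names are my own), the Normality and Orthogonality Conditions form a 4×4 linear system that can be solved directly:

```python
import numpy as np

# Exponent matrix alpha of the four terms of g(t) (rows i, columns j).
alpha = np.array([[-1.0, -1.0, -1.0],   # 1/(t1 t2 t3)
                  [ 0.0,  1.0,  1.0],   # 2 t2 t3
                  [ 1.0,  0.0,  1.0],   # 3 t1 t3
                  [ 1.0,  1.0,  0.0]])  # 4 t1 t2
c = np.array([1.0, 2.0, 3.0, 4.0])

# Normality (a row of ones) stacked on orthogonality (alpha transposed).
A = np.vstack([np.ones(4), alpha.T])
b = np.array([1.0, 0.0, 0.0, 0.0])
delta = np.linalg.solve(A, b)       # -> approximately (2/5, 1/5, 1/5, 1/5)

v = np.prod((c / delta) ** delta)   # -> approximately 7.155
```

The computed δ and v(δ∗) agree with the values derived by hand above.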
Example We now consider the GP

Minimize $g(t) = \dfrac{2}{t_1 t_2} + t_1 t_2 + t_1$
subject to $t_1, t_2 > 0$.

This leads to the DGP

Maximize $v(\delta) = \left(\frac{2}{\delta_1}\right)^{\delta_1} \left(\frac{1}{\delta_2}\right)^{\delta_2} \left(\frac{1}{\delta_3}\right)^{\delta_3}$
subject to $\delta_1, \delta_2, \delta_3 > 0$ (Positivity Condition)
$\delta_1 + \delta_2 + \delta_3 = 1$ (Normality Condition)
$-\delta_1 + \delta_2 + \delta_3 = 0,$
$-\delta_1 + \delta_2 = 0$ (Orthogonality Condition).

Unfortunately, the only values of δ_1, δ_2, δ_3 that satisfy the Normality Condition and the Orthogonality Condition are δ_1 = 1/2, δ_2 = 1/2, δ_3 = 0. These values do not satisfy the Positivity Condition, so there are no feasible vectors for the DGP. We conclude that there is no solution to the GP. □
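This failure can also be seen numerically (a sketch with illustrative names): solving the square Normality/Orthogonality system produces a component that is not strictly positive:

```python
import numpy as np

# Exponent matrix alpha of the three terms of g(t).
alpha = np.array([[-1.0, -1.0],   # 2/(t1 t2)
                  [ 1.0,  1.0],   # t1 t2
                  [ 1.0,  0.0]])  # t1

# Normality plus the two Orthogonality Conditions: a 3x3 system.
A = np.vstack([np.ones(3), alpha.T])
b = np.array([1.0, 0.0, 0.0])
delta = np.linalg.solve(A, b)   # -> approximately (1/2, 1/2, 0)

# delta_3 = 0 violates the Positivity Condition, so no feasible
# vector exists and the GP has no solution.
```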

Exercises
1. Chapter 2, Exercise 18

2. Chapter 2, Exercise 21

3. Chapter 2, Exercise 25

4. Chapter 2, Exercise 26
