Lec 1 - Introduction

This document provides an introduction to convex optimization theory. It defines optimization problems and gives examples such as portfolio optimization. It explains that while general optimization is difficult, certain problem classes like least squares, linear programming, and convex optimization can be solved efficiently and reliably. The document outlines the goals and topics of convex optimization theory, provides a brief history of the field, and lists references for further reading.


Convex Optimization Theory

1. Introduction

By Dr. Islam Gamal


Outline

• What is optimization?
• Examples
• Factors that come into play
• Efficient/Reliable Problems
• Goals and Topics
• Brief History
• References
Optimization

Given a problem with some constraints, the goal is to find the best (optimal)
solution: one that solves the problem while satisfying all of the constraints.
Mathematical Optimization

(mathematical) optimization problem

minimize f₀(x)
subject to fᵢ(x) ≤ bᵢ,  i = 1, . . . , m

• x = (x₁, . . . , xₙ): optimization variables
• f₀ : ℝⁿ → ℝ: objective function
• fᵢ : ℝⁿ → ℝ, i = 1, . . . , m: constraint functions

an optimal solution x⋆ has the smallest value of f₀ among all vectors that
satisfy the constraints
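As a concrete illustration of this general form, the sketch below solves a small made-up instance (the objective, constraint, and numbers are invented for this example, not from the slides): minimize f₀(x) = (x₁ − 1)² + (x₂ − 2)² subject to x₁ + x₂ ≤ 2, using SciPy's general-purpose solver.

```python
import numpy as np
from scipy.optimize import minimize

# objective f0(x) = (x1 - 1)^2 + (x2 - 2)^2
f0 = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# one constraint f1(x) = x1 + x2 <= 2; SciPy's "ineq" form expects g(x) >= 0
cons = [{"type": "ineq", "fun": lambda x: 2.0 - x[0] - x[1]}]

res = minimize(f0, x0=[0.0, 0.0], constraints=cons, method="SLSQP")
# the minimizer is the projection of (1, 2) onto the half-space: (0.5, 1.5)
print(res.x)
```

Geometrically, the unconstrained minimizer (1, 2) violates the constraint, so the optimal x⋆ lies on the boundary x₁ + x₂ = 2.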
Examples
Portfolio optimization
• variables: amounts invested in different assets
• constraints: budget, max./min. investment per asset, minimum return
• objective: overall risk or return variance

Device sizing in electronic circuits


• variables: device widths and lengths
• constraints: manufacturing limits, timing requirements, maximum area
• objective: power consumption

Data fitting
• variables: model parameters
• constraints: prior information, parameter limits
• objective: measure of misfit or prediction error

Solving Optimization Problems

General optimization problem

• very difficult to solve


• methods involve some compromise, e.g., very long computation time,
or not always finding the solution

Exceptions: certain problem classes can be solved efficiently and reliably

• least-squares problems
• linear programming problems
• convex optimization problems

Least-squares

minimize ∥Ax − b∥₂²

solving least-squares problems

• analytical solution: x⋆ = (AᵀA)⁻¹Aᵀb
• reliable and efficient algorithms and software
• computation time proportional to n²k (A ∈ ℝ^(k×n)); less if structured
• a mature technology

using least-squares

• least-squares problems are easy to recognize
• a few standard techniques increase flexibility (e.g., including weights,
adding regularization terms)
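A minimal sketch of both solution routes, on synthetic data: the analytical formula x⋆ = (AᵀA)⁻¹Aᵀb via the normal equations, and NumPy's library routine (which uses a more numerically robust factorization). Both should return the same minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))   # k = 50 measurements, n = 3 unknowns
b = rng.standard_normal(50)

# analytical solution via the normal equations (A^T A) x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# library routine, preferred in practice for ill-conditioned A
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))   # same minimizer either way
```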
Linear Programming

minimize cᵀx
subject to aᵢᵀx ≤ bᵢ,  i = 1, . . . , m

solving linear programs

• no analytical formula for the solution
• reliable and efficient algorithms and software
• computation time proportional to n²m if m ≥ n; less with structure
• a mature technology

using linear programming

• not as easy to recognize as least-squares problems
• a few standard tricks are used to convert problems into linear programs
(e.g., problems involving ℓ1- or ℓ∞-norms, piecewise-linear functions)
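One of those tricks can be sketched directly (on made-up data): minimizing the ℓ1 residual ∥Ax − b∥₁ is not an LP as written, but it becomes one after introducing slack variables t with −t ≤ Ax − b ≤ t and minimizing Σᵢ tᵢ.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(20)

m, n = A.shape
# stacked variable z = [x (n entries), t (m entries)]; minimize sum(t)
c = np.concatenate([np.zeros(n), np.ones(m)])
# encode  A x - b <= t  and  -(A x - b) <= t  as A_ub z <= b_ub
A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
b_ub = np.concatenate([b, -b])
bounds = [(None, None)] * n + [(0, None)] * m   # x free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x_l1 = res.x[:n]
print(res.fun)   # equals the optimal l1 residual norm(A @ x_l1 - b, 1)
```

At the optimum each tᵢ equals |aᵢᵀx − bᵢ|, so the LP's objective value is exactly the ℓ1 residual.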
Convex Optimization Problem

minimize f₀(x)
subject to fᵢ(x) ≤ bᵢ,  i = 1, . . . , m

• objective and constraint functions are convex:

fᵢ(αx + βy) ≤ αfᵢ(x) + βfᵢ(y)  if α + β = 1, α ≥ 0, β ≥ 0

• includes least-squares problems and linear programs as special cases
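The defining inequality can be spot-checked numerically. The sketch below does this for f(x) = log Σᵢ exp(xᵢ) (log-sum-exp, a standard convex function chosen here for illustration; it is not one of the slide examples) at many random point pairs and mixing weights.

```python
import numpy as np

def f(x):
    # log-sum-exp, a well-known convex function
    return np.log(np.sum(np.exp(x)))

rng = np.random.default_rng(42)
for _ in range(1000):
    x, y = rng.standard_normal(4), rng.standard_normal(4)
    alpha = rng.random()
    beta = 1.0 - alpha
    # the convexity inequality f(ax + by) <= a f(x) + b f(y) must hold
    assert f(alpha * x + beta * y) <= alpha * f(x) + beta * f(y) + 1e-12

print("convexity inequality held at all sampled points")
```

Random sampling of course only falsifies, never proves, convexity; the slides' definition quantifies over all x, y, α, β.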
Solving Convex Optimization Problems

• no analytical solution
• reliable and efficient algorithms
• computation time (roughly) proportional to max{n³, n²m, F}, where F
is the cost of evaluating the fᵢ's and their first and second derivatives
• almost a technology

Using Convex Optimization

• often difficult to recognize
• many tricks for transforming problems into convex form
• surprisingly, many problems can be solved via convex optimization
Course Goals And Topics

Goals

1. recognize/formulate problems (such as the illumination problem) as
convex optimization problems
2. develop code for problems of moderate size (1000 lamps, 5000 patches)
3. characterize optimal solution (optimal power distribution), give limits
of performance, etc.

Topics

1. convex sets, functions, optimization problems
2. examples and applications
3. algorithms
Nonlinear Optimization

traditional techniques for general nonconvex problems involve compromises

local optimization methods (nonlinear programming)

• find a point that minimizes f₀ among feasible points near it
• fast, can handle large problems
• require initial guess
• provide no information about distance to (global) optimum

global optimization methods

• find the (global) solution
• worst-case complexity grows exponentially with problem size

these algorithms are often based on solving convex subproblems
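The dependence on the initial guess is easy to demonstrate on a toy nonconvex function (invented for this sketch): f(x) = x⁴ − 3x² + x has two local minima, and the same local method returns whichever one lies in the basin it starts from, with no indication that a better point exists.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 4 - 3.0 * x[0] ** 2 + x[0]

# identical local method, two different starting points
r_left = minimize(f, x0=[-2.0])   # converges to the global minimum near x = -1.30
r_right = minimize(f, x0=[2.0])   # stops at the worse local minimum near x = 1.13

print(r_left.x, r_right.x)        # different answers; the solver reports success for both
```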


Brief history of convex optimization
theory (convex analysis): ca. 1900–1970

algorithms
• 1947: simplex algorithm for linear programming (Dantzig)
• 1960s: early interior-point methods (Fiacco & McCormick, Dikin, . . . )
• 1970s: ellipsoid method and other subgradient methods
• 1980s: polynomial-time interior-point methods for linear programming
(Karmarkar 1984)
• late 1980s–now: polynomial-time interior-point methods for nonlinear
convex optimization (Nesterov & Nemirovski 1994)

applications
• before 1990: mostly in operations research; few in engineering
• since 1990: many new applications in engineering (control, signal
processing, communications, circuit design, . . . ); new problem classes
(semidefinite and second-order cone programming, robust optimization)
References
