
Please cite this article as: Andrzej Grzybowski, Monte Carlo optimization procedure for chance constrained programming - simulation study results, Scientific Research of the Institute of Mathematics and Computer Science 2009, Volume 8, Issue 1, pages 39-46.
Website: http://www.amcm.pcz.pl/
Scientific Research of the Institute of Mathematics and Computer Science

MONTE CARLO OPTIMIZATION PROCEDURE
FOR CHANCE CONSTRAINED PROGRAMMING
- SIMULATION STUDY RESULTS

Andrzej Grzybowski
Institute of Mathematics, Czestochowa University of Technology, Poland
[email protected]

Abstract. In the paper a chance constrained linear programming problem is considered in the case of joint chance constraints with random right-hand sides. It is assumed that, due to its complex stochastic nature, the problem cannot be reduced to any deterministic equivalent problem. In such a case a Monte Carlo method involving an evolutionary algorithm with soft selection is proposed to solve the problem. The simulation results are presented and discussed.

Introduction

Chance Constrained Programming (CCP) or, more generally, stochastic programming deals with a class of optimization models and algorithms in which some of the data may be subject to significant uncertainty. Such models are appropriate when data cannot be observed without error or evolve over time, and decisions have to be made prior to observing the entire data stream. The concept of CCP was introduced in the classical work of Charnes and Cooper [1]. CCP is now one of the major approaches for dealing with random parameters in optimization problems. Typical areas of application are engineering design, see [2], finance (e.g. [3]), budgeting [4, 5] and portfolio analysis [6]. In models built for such real-world problems, uncertainties such as product demand, cost of supply, price of a final product, demographic conditions, currency exchange rates, rates of return, etc. enter the inequalities describing the natural constraints that must be satisfied for the proper working of the system under consideration.
Stochastic optimization problems are among the most difficult problems of mathematical programming. Most of the existing computational methods are applicable only to convex problems. There are, however, many important applied optimization problems which are both stochastic and non-convex. Many of them are also multi-extremal. A discussion of various computational aspects of CCP problems can be found in [6-8] or [9]. In this paper the method of evolutionary search with soft selection is proposed in order to find a satisfactory solution. The statistical performance of such a solution is studied via computer simulations.

1. Chance Constrained Linear Programming Problem

Let us consider a classical (deterministic) linear programming problem:

maximize f(x_1,…,x_n) = c_1 x_1 + c_2 x_2 + … + c_n x_n

subject to the following constraints (s.t.):

a_i1 x_1 + a_i2 x_2 + … + a_in x_n ≤ b_i,   i = 1,…,m
x_1 ≥ 0, …, x_n ≥ 0

where f is the objective function, x = [x_1, x_2, …, x_n]^T is the decision variable vector, A = [a_ij]_{m×n} is the matrix of (technical) coefficients of the system of linear inequalities, the vector b = [b_1, b_2, …, b_m]^T will be addressed as the right-hand side of the constraint system, and c = [c_1, c_2, …, c_n]^T is the vector of the objective function coefficients.
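For illustration, such a deterministic problem can be solved with any LP solver. Below is a minimal sketch using SciPy's linprog; the data are made up for the example and are not taken from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Purely illustrative data: n = 2 decision variables, m = 2 constraints
c = np.array([3.0, 5.0])            # objective coefficients c_j
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])          # technical coefficients a_ij
b = np.array([14.0, 18.0])          # right-hand sides b_i

# linprog minimizes, so we negate c; the default bounds already impose x_j >= 0
res = linprog(c=-c, A_ub=A, b_ub=b, method="highs")
print("optimal x:", res.x, "optimal value f(x):", -res.fun)
```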
As we have mentioned before, in many applications the elements of the tuple (A, b, c) cannot be considered known constants. All or some of them are uncertain. Thus it is difficult or even impossible to know which solution will turn out to be feasible. In such circumstances, one would rather insist on decisions guaranteeing feasibility 'as much as possible'. This loose term refers to the fact that constraint violation can almost never be avoided because of unexpected (or simply random) events. On the other hand, after properly estimating the distribution of the random parameters, it makes sense to call decisions feasible (in a stochastic sense) whenever they are feasible with high probability, i.e. only a low percentage of realizations of the random parameters leads to constraint violation under the given decision. This leads to the CCP formulation of the problem, where the deterministic constraints are replaced with probabilistic (chance) ones in one of the two following ways:

1.1. Individual chance constraints

maximize E f(x_1,…,x_n) = E(c_1) x_1 + E(c_2) x_2 + … + E(c_n) x_n

s.t.

Pr(a_i1 x_1 + a_i2 x_2 + … + a_in x_n ≤ b_i) ≥ q_i,   i = 1,…,m
x_1 ≥ 0, …, x_n ≥ 0

where q = [q_1, q_2, …, q_m], q_i ∈ [0,1], i = 1,…,m, is the vector of prescribed values of the so-called probability levels, given for each constraint separately.

1.2. Joint chance constraints

maximize E f(x_1,…,x_n) = E(c_1) x_1 + E(c_2) x_2 + … + E(c_n) x_n

s.t.

Pr(a_i1 x_1 + a_i2 x_2 + … + a_in x_n ≤ b_i, i = 1,…,m) ≥ q
x_1 ≥ 0, …, x_n ≥ 0

where q ∈ [0,1] is the probability level.


The value of the probability level(s) is chosen by the decision maker in order to model the safety requirements. Sometimes the probability level is strictly fixed from the very beginning (e.g. q = 0.95, 0.99, etc.). In other situations the decision maker may only have a vague idea of a properly chosen level. It is obvious that higher values of q lead to fewer feasible decisions x, and hence to smaller optimal values of the expected gain. In some simple cases (especially in the case of individual chance constraints) the problem can be replaced with its deterministic equivalent, see e.g. [4, 9].
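When no deterministic equivalent is at hand, the probability in a joint chance constraint can at least be estimated by sampling. A minimal sketch in Python, assuming (only for illustration) that the entries of A and b are independent and normally distributed around given means:

```python
import numpy as np

def joint_feasibility_prob(x, A_mean, b_mean, rel_sd=0.1, n_samples=10_000, rng=None):
    """Monte Carlo estimate of Pr(a_i1 x_1 + ... + a_in x_n <= b_i for all i).
    The normal distributions and the relative spread `rel_sd` are assumptions
    made for this sketch, not prescribed by the CCP formulation itself."""
    rng = rng or np.random.default_rng()
    hits = 0
    for _ in range(n_samples):
        A = rng.normal(A_mean, rel_sd * np.abs(A_mean))   # one realization of A
        b = rng.normal(b_mean, rel_sd * np.abs(b_mean))   # one realization of b
        hits += np.all(A @ x <= b)
    return hits / n_samples
```

A decision x is then stochastically feasible at level q whenever this estimate is at least q.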
The main challenge in designing algorithms for general stochastic programming problems arises from the need to calculate conditional expectations and/or probabilities associated with multi-dimensional random variables, see [8, 9, 11]. This makes CCP problems among the most difficult problems of mathematical programming. The computational challenges and methods in the field of optimization under uncertainty are addressed e.g. in [8] and [9]. A brief survey of some of the most relevant developments can be found in [7].
In this paper we consider the situation where, due to the assumed complex stochastic nature of the problem, no deterministic equivalent is available. In order to find a satisfactory, stochastically feasible solution we propose a criterion based on the expected utility of a given solution and adopt an algorithm of evolutionary search with soft selection.

2. Problem formulation

In our studies we examine linear programming models in the case where all the parameters describing the problem, i.e. the matrix A and the vectors b, c, are random, with expectations E(A) = Ā, E(b) = b̄, E(c) = c̄. In the sequel such a problem will be denoted by CCLP(Ā, b̄, c̄). The performance of the solution found by the Monte Carlo method for CCLP(Ā, b̄, c̄) is compared with the performance of the optimal solution of the deterministic linear programming problem given by the parameters Ā, b̄, c̄ - in the sequel the latter will be denoted by DLP(Ā, b̄, c̄).
The decision maker dealing with the CCLP(Ā, b̄, c̄) problem should maximize both the probability q that the given system of constraints will be satisfied and the expected value of the objective function. However, these goals appear to be contradictory (at least to some extent), and therefore in our studies we propose to use the following index of performance of a given solution:

IP(x) = p_x · U(E(f(x)))

where p_x is the probability that the system of constraints is satisfied when one uses the solution x, and U is a utility function which allows us to take into account the decision maker's preferences connected with both goals. The index IP can be interpreted as a conditional expected utility of the expected value of the objective function, under the condition that the system of constraints is satisfied. We assume that if the condition is not fulfilled, then the utility of any gain equals zero.
The Monte Carlo method is, generally speaking, a numerical method based on random sampling. It is therefore a method which allows us to analyze a given decision rule in terms of its statistical performance. Thus in our studies the above index of performance takes the following statistical form:

SIP(x) = (N_s / N_SIP) · U( (1/N_SIP) · Σ_{i=1}^{N_SIP} f_i(x) )

where N_SIP is the number of i.i.d. realizations of CCLP(Ā, b̄, c̄), N_s is the number of successful realizations (i.e. the realizations for which the system of constraints was satisfied), and f_i(x) is the value of the objective function obtained in the i-th random realization of the problem. A single random realization of CCLP(Ā, b̄, c̄) is a realization of the random tuple (A, b, c) with a probability distribution satisfying E(A) = Ā, E(b) = b̄, E(c) = c̄.
In our simulation study we consider problems with positive objective functions and we use the utility function given by

U(y) = y · 1_[0,∞)(y)
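Putting the index and the utility together, the estimator SIP can be sketched as follows. The sampler of random problem realizations is passed in as a function; its particular form is an assumption of the sketch.

```python
import numpy as np

def utility(y):
    # U(y) = y * 1_[0,inf)(y): zero utility for negative gains
    return y if y >= 0 else 0.0

def sip(x, sample_problem, n_sip=500):
    """Statistical index of performance SIP(x) = (N_s / N_SIP) * U(mean of f_i(x)).
    `sample_problem()` is assumed to return one random realization (A, b, c)
    of the CCLP problem with the prescribed expectations."""
    n_success = 0
    values = np.empty(n_sip)
    for i in range(n_sip):
        A, b, c = sample_problem()
        values[i] = c @ x                    # f_i(x) in the i-th realization
        n_success += np.all(A @ x <= b)      # constraint system satisfied?
    return (n_success / n_sip) * utility(values.mean())
```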

3. Algorithm of the evolutionary search with soft selection (ES-SS)

To find a solution to the CCLP(Ā, b̄, c̄) problem we adopt the algorithm of evolutionary search with soft selection described e.g. in [10, 12]. The algorithm implemented in our simulations is as follows:
Step 1. Set the initial population of k vectors v_i ∈ R^n, i = 1, 2, …, k (the so-called initial parent population).
Step 2. Assign to each vector v_i, i = 1,…,k, its fitness, i.e. the value of the criterion SIP(v_i).
Step 3. Select a parent v by soft selection, i.e. with probability proportional to its fitness.
Step 4. Create a descendant w from the chosen parent v by random mutation: w = v + Z, where Z is a random n-dimensional vector with coefficients having expected value zero and a given standard deviation σ_z.
Step 5. Repeat Steps 3 and 4 k times to create a new k-element generation of n-dimensional vectors (descendants).
Step 6. Replace the parent population with the descendant population.
Step 7. Repeat Steps 2 to 6 N_G times, where N_G is a sufficiently large number.
Step 8. Return the last generation and the fitness of its elements.
In our simulations the initial population in Step 1 was generated as a population of mutations of the optimal solution of the DLP(Ā, b̄, c̄) problem. We also record the value of the best fitness achieved during the simulations and the vector v to which this best value was assigned. The latter is the solution of the CCLP(Ā, b̄, c̄) problem found by the evolutionary search; it is denoted by x_ES. The optimal solution of the DLP(Ā, b̄, c̄) problem is denoted by x_D.
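A compact Python sketch of Steps 1-8, with the fitness given by the SIP estimator above; the parameter names (k, σ_z, N_G) follow the text, while the defaults are only placeholders:

```python
import numpy as np

def es_ss(fitness, x_d, k=10, n_gen=100, sigma_z=0.1, rng=None):
    """Evolutionary search with soft selection. Starts, as in the study, from
    mutations of the deterministic optimum x_d; returns the best vector found,
    its fitness, and the generation in which it appeared."""
    rng = rng or np.random.default_rng()
    n = len(x_d)
    # Step 1: initial parent population as mutations of x_d
    pop = [x_d + rng.normal(0.0, sigma_z, n) for _ in range(k)]
    best_x, best_fit, best_gen = x_d, -np.inf, 0
    for gen in range(1, n_gen + 1):
        fit = np.array([fitness(v) for v in pop])            # Step 2: fitness
        if fit.max() > best_fit:
            best_x, best_fit, best_gen = pop[int(fit.argmax())], fit.max(), gen
        # Step 3: soft selection - probability proportional to fitness
        probs = fit / fit.sum() if fit.sum() > 0 else np.full(k, 1.0 / k)
        # Steps 4-5: k Gaussian mutations of softly selected parents,
        # Step 6: the descendants replace the parent population
        pop = [pop[rng.choice(k, p=probs)] + rng.normal(0.0, sigma_z, n)
               for _ in range(k)]
    return best_x, best_fit, best_gen                        # Steps 7-8
```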

4. Simulation study of the solutions performance

To estimate the performance of the solution x_ES we compare it with the performance of the optimal deterministic solution x_D in two frameworks: stochastic and deterministic (ideal). To do this we propose the following performance indicators:

- the statistical performance rate: SPR = SIP(x_ES) / SIP(x_D)
- the ratio between SIP(x_ES) and the utility of the optimal objective function value in the deterministic case: SDR = SIP(x_ES) / U(f(x_D))

The values of the indicators obviously depend on the problem parameters, i.e. on the tuple (Ā, b̄, c̄) and the dimensions of its elements. Thus we compute the values of the indicators for various values of the parameters and compare their statistical characteristics: maximum value, minimum value, median, mean value and standard deviation. In order to obtain the statistical data we use the following simulation procedure.
Step 1. Set the parameters n, k, N_SIP, N_G and K.
Step 2. Randomly generate the tuple (Ā, b̄, c̄).
Step 3. Solve the DLP(Ā, b̄, c̄) problem by the simplex algorithm and obtain the solution x_D, the optimal value f(x_D) and SIP(x_D).
Step 4. Solve the CCLP(Ā, b̄, c̄) problem by the ES-SS algorithm and obtain the solution x_ES and SIP(x_ES).
Step 5. Compute and record the values of SPR and SDR for this setup.
Step 6. Repeat Steps 2 to 5 K times, where K is a sufficiently large number.
Step 7. Return the maximum values, minimum values, medians, mean values and standard deviations of SPR and SDR.
In our research we use the following values of the parameters: N_SIP = 500, n = 5, 10, 15, k = 10 and K = 50. In the ES-SS algorithm the distributions of the mutations are normal with a constant standard deviation equal to 0.1. The values of the parameter m are drawn from the set {n−2, …, n+5}. The elements of the tuple (Ā, b̄, c̄) are drawn from the interval [200, 700]. In the SIP procedure the random disturbances of the elements of the matrix and of both vectors are normal with zero mean and standard deviation equal to 10% of the value of the corresponding element.
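For completeness, one replication of this procedure might be wired up as below, reusing utility, sip and es_ss from the earlier sketches. The uniform [200, 700] generation and the 10% normal disturbances follow the text; everything else is an assumption of the sketch.

```python
import numpy as np
from scipy.optimize import linprog

def one_replication(n, m, rng, n_sip=500, rel_sd=0.1):
    # Step 2: draw the mean tuple from the interval [200, 700]
    A_bar = rng.uniform(200, 700, (m, n))
    b_bar = rng.uniform(200, 700, m)
    c_bar = rng.uniform(200, 700, n)

    def sample_problem():
        # One random realization: each element disturbed by a zero-mean normal
        # with standard deviation equal to 10% of the element's value
        return (rng.normal(A_bar, rel_sd * A_bar),
                rng.normal(b_bar, rel_sd * b_bar),
                rng.normal(c_bar, rel_sd * c_bar))

    # Step 3: deterministic solution x_D (HiGHS stands in for the simplex solver)
    res = linprog(c=-c_bar, A_ub=A_bar, b_ub=b_bar, method="highs")
    x_d, f_d = res.x, -res.fun

    # Step 4: evolutionary solution x_ES with fitness SIP
    fit = lambda x: sip(x, sample_problem, n_sip)
    x_es, sip_es, _ = es_ss(fit, x_d)

    # Step 5: the two indicators (SIP(x_D) may be 0 in extreme cases,
    # so a full implementation would guard this division)
    return sip_es / fit(x_d), sip_es / utility(f_d)
```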

5. Results and final remarks

In the following tables we present the results of our simulations. The first table shows the results obtained for the indicator SPR for n = 5, 10, 15.

Table 1. Statistical characteristics of SPR, n = 5, 10, 15

 n     Min      Max       Median    Mean     Stand. Dev.
 5     3.49     16.11      6.51      7.77     3.95
10     3.07    216.87     19.28     42.50    53.96
15     6.94     98.36     66.95     55.56    36.64

We can see that the performance of the solution x_ES in comparison with the performance of the deterministic optimal solution x_D is very good. The expected utility of x_ES is at least three times greater than the expected utility of x_D (see the Min column). The ratio of the expected utilities may even exceed 200, see the Max column of Table 1. On average, the greater the number of decision variables n, the greater the ratio, see the Median and Mean columns.
Another question is how large the expected utility of the mean value of the objective function is when we use the solution x_ES, in comparison with the utility of the optimal value of the objective function in the ideal, deterministic case. The answer is given by the values of the indicator SDR presented in Table 2.

Table 2. Statistical characteristics of SDR, n = 5, 10, 15

 n     Min    Max    Median   Mean   Stand. Dev.
 5     63%    90%    87%      85%     6.0%
10     23%    85%    80%      77%    14%
15     77%    86%    80%      81%    28%

We see that the average expected utility amounts to about 80% of the ideal optimal value. Since the standard deviations are rather small, this indicates a very good performance of the solution x_ES. The obtained values of the index SDR show that no other algorithm could gain much better performance in the stochastic framework.
The last question addressed in this paper is how many generations should be created to obtain a satisfactory solution. Table 3 provides the statistical characteristics of the number of the generation containing the best element in our simulations.

Table 3. Number of the best generation - statistical characteristics, n = 5, 10, 15

 n     Min   Max   Median   Mean    Stand. Dev.
 5      2    99    23       36.85   33.69
10      1    52    17       21.64   15.53
15      1    67    36       36.16   20.65

The Min and Max columns of Table 3 contain the minimal and maximal numbers of generations needed to obtain the element with the highest value of expected utility. We see that one hundred generations were always enough - 99 was the maximal value. The medians and means of these numbers show that, on average, forty generations are sufficient to obtain a satisfactory solution for the considered CCLP problems.

Final remarks

The solution of the CCLP(Ā, b̄, c̄) problem found by the evolutionary search may be called satisfactory rather than optimal. Its optimality cannot be proved, and we do not even believe that it is optimal. But the solution is relatively easy to find and, in the considered stochastic framework, much better than the optimal solution found in the deterministic case. The improvement, measured in terms of the average value of the statistical performance rate SPR, amounts to about 7 (for n = 5) or even about 60 (for n = 15). The solution may be considered satisfactory especially because of the high values of the indicator SDR: its average values are about 80%. Taking into account that the standard deviations of the random variables disturbing all elements of the tuple (Ā, b̄, c̄) amount to 10% of their original values, one should not expect much more, even when applying more sophisticated methods.

References
[1] Charnes A., Cooper W.W., Chance-constrained programming, Management Science 1959, 6, 73-80.
[2] Taflanidis A.A., Beck J.L., An efficient framework for optimal robust stochastic system design using stochastic simulation, Comput. Methods Appl. Mech. Engrg. 2008, 198, 88-101.
[3] Steuer R., Na P., Multiple criteria decision making combined with finance: A categorized bibliographic study, European Journal of Operational Research 2003, 150, 496-515.
[4] Dima I.C., Grzybowski A., Modele optymalizacyjne w problemach budżetowania w warunkach niepewności i/lub ryzyka [Optimization models in budgeting problems under uncertainty and/or risk], [in:] Budżetowanie, I.C. Dima, J. Grabara (eds.), Prace Naukowe WZ PCz, seria Monografie Nr 19, Częstochowa 2009, 153-166 (in Polish).
[5] De P.K., Acharya D., Sahu K.C., A chance-constrained goal programming model for capital budgeting, Journal of the Operational Research Society 1982, 33(7), 635-638.
[6] Norkin V.I., Pflug G.Ch., Ruszczyński A., A branch and bound method for stochastic global optimization, Mathematical Programming: Series A and B 1998, 83(3), 425-450.
[7] Schuëller G.I., Jensen H.A., Computational methods in optimization considering uncertainties - an overview, Comput. Methods Appl. Mech. Engrg. 2008, 198, 2-13.
[8] Sen S., Stochastic programming: computational issues and challenges, [in:] Encyclopedia of Operations Research and Management Science, 2nd edition, S. Gass, C. Harris (eds.), Kluwer Academic Publishers, Boston 2001, 821-827.
[9] Ruszczyński A., Shapiro A., Stochastic Programming, Handbooks in Operations Research and Management Science, Vol. 10, Elsevier, Amsterdam 2003.
[10] Galar R., Soft Selection in Random Global Adaptation in R^n: A Biocybernetic Model of Development, Technical University Press, Wrocław 1990 (in Polish).
[11] Sen S., Higle J.L., An introductory tutorial on stochastic linear programming models, Interfaces 1999, 29, 33-61.
[12] Obuchowicz A., Korbicz J., Global optimization via evolutionary search with soft selection, manuscript, available at: http://www.mat.univie.ac.at/~neum/glopt/mss/gloptpapers.html
