Monte Carlo Optimization Procedure For Chance Constrained Programming - Simulation Study Results
Andrzej Grzybowski, Monte Carlo optimization procedure for chance constrained programming - simulation study
results, Scientific Research of the Institute of Mathematics and Computer Science, 2009, Volume 8, Issue 1, pages
39-46.
The website: http://www.amcm.pcz.pl/
Andrzej Grzybowski
Institute of Mathematics, Czestochowa University of Technology, Poland
[email protected]
Introduction
We consider the linear programming problem

   maximize f(x) = c^T x   subject to   Ax ≤ b,  x ≥ 0

where f is the objective function, x = [x1, x2, ..., xn]^T is the decision variable vector, A = [aij]m×n is the matrix of (technical) coefficients of the system of linear inequalities, the coefficient vector b = [b1, b2, ..., bm]^T will be addressed as the right-hand side of the constraint system, and c = [c1, c2, ..., cn]^T is the vector of objective function coefficients.
As we have mentioned before, in many applications the elements of the tuple (A, b, c) cannot be treated as known constants: all or some of them are uncertain. It is therefore difficult, or even impossible, to know in advance which solution will turn out to be feasible. In such circumstances one would rather insist on decisions guaranteeing feasibility 'as much as possible'. This loose term reflects the fact that constraint violation can almost never be avoided entirely because of unexpected (or simply random) events. On the other hand, once the distribution of the random parameters has been properly estimated, it makes sense to call a decision feasible (in a stochastic sense) whenever it is feasible with high probability, i.e. only a low percentage of realizations of the random parameters leads to constraint violation under the given decision. This leads to the CCP formulation of the problem, where the deterministic constraints are replaced with probabilistic (chance) constraints in one of the two following ways: jointly, by requiring P(Ax ≤ b) ≥ p for a prescribed probability level p, or individually, by requiring P(a_i^T x ≤ b_i) ≥ p_i for each constraint i = 1, ..., m.
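For a fixed decision x the chance P(Ax ≤ b) is rarely available in closed form, but it can be estimated by random sampling. The following is a minimal sketch, our own illustration rather than the paper's code; the 10% relative standard deviation mirrors the perturbation scheme used later in the study:

```python
import numpy as np

def chance_of_feasibility(x, A_mean, b_mean, sd_frac=0.1, n_samples=10_000, seed=0):
    """Estimate P(Ax <= b) when the elements of A and b are normally
    perturbed around their means, with standard deviation equal to
    sd_frac times the magnitude of each element (an assumed scheme)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        A = rng.normal(A_mean, sd_frac * np.abs(A_mean))
        b = rng.normal(b_mean, sd_frac * np.abs(b_mean))
        if np.all(A @ x <= b):   # all chance constraints satisfied jointly
            hits += 1
    return hits / n_samples
```

For x = 0 and positive right-hand sides the estimate is close to 1, while decisions far outside the feasible region yield estimates close to 0.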
2. Problem formulation
In our studies we examine linear programming models in the case where all parameters describing the problem, i.e. the matrix A and the vectors b and c, are random with expectations E(A) = Ā, E(b) = b̄, E(c) = c̄. In the sequel such a problem will be denoted by CCLP(Ā, b̄, c̄). The performance of the solution found by the Monte Carlo method for CCLP(Ā, b̄, c̄) is compared with the performance of the optimal solution of the deterministic linear programming problem given by the parameters Ā, b̄, c̄; in the sequel the latter will be denoted by DLP(Ā, b̄, c̄).
The decision-maker dealing with the CCLP(Ā, b̄, c̄) problem should maximize both the probability q that the given system of constraints will be satisfied and the expected value of the objective function. However, these goals appear to be contradictory (at least to some extent), and therefore in our studies we propose the following index of performance of a given solution:

   IP(x) = p_x · U(E(f(x)))
where p_x is the probability that the system of constraints is satisfied when one uses the solution x, and U is a utility function which allows the decision-maker's preferences concerning both goals to be taken into account. The index IP can be interpreted as the conditional expected utility of the expected value of the objective function, given that the system of constraints is satisfied. We assume that if this condition is not fulfilled, the utility of any gain equals zero.
The Monte Carlo method is, generally speaking, a numerical method based on random sampling. It therefore allows a given decision rule to be analyzed in terms of its statistical performance. Thus in our studies the above index of performance takes the following statistical form:
   SIP(x) = (Ns / NSIP) · U( (1/Ns) · Σ_{i=1..Ns} f_i(x) )
where NSIP is the number of i.i.d. realizations of CCLP(Ā, b̄, c̄), Ns is the number of successful realizations (i.e. realizations for which the system of constraints was satisfied), and f_i(x) is the value of the objective function obtained in the i-th random realization of the problem. A single random realization of CCLP(Ā, b̄, c̄) is a realization of the random tuple (A, b, c) with a probability distribution satisfying E(A) = Ā, E(b) = b̄, E(c) = c̄.
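The SIP estimator can be sketched in a few lines. This is an illustration under our own assumptions: normal perturbations with 10% relative standard deviation (the scheme used in the study's setup), a linear objective f_i(x) = c_i^T x, and the utility U(y) = y · 1_[0,∞)(y):

```python
import numpy as np

def U(y):
    # Utility assumed here: U(y) = y * 1_[0, inf)(y).
    return y if y >= 0 else 0.0

def sip(x, A_bar, b_bar, c_bar, n_sip=500, sd_frac=0.1, seed=0):
    """Statistical index of performance:
       SIP(x) = (Ns / NSIP) * U( (1/Ns) * sum_i f_i(x) )
    estimated over i.i.d. realizations of (A, b, c) with the given means."""
    rng = np.random.default_rng(seed)
    successes = []
    for _ in range(n_sip):
        A = rng.normal(A_bar, sd_frac * np.abs(A_bar))
        b = rng.normal(b_bar, sd_frac * np.abs(b_bar))
        c = rng.normal(c_bar, sd_frac * np.abs(c_bar))
        if np.all(A @ x <= b):        # realization counts as successful
            successes.append(c @ x)   # objective value f_i(x) = c_i^T x
    if not successes:
        return 0.0
    return (len(successes) / n_sip) * U(float(np.mean(successes)))
```

Note that SIP penalizes a decision twice: a rarely feasible x lowers the success ratio Ns/NSIP, and only the objective values of the successful realizations enter the averaged utility.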
In our simulation study we consider problems with positive objective functions, and we use the utility function given by:

   U(y) = y · 1_[0,∞)(y)
Step 7. Repeat Steps 2 to 6 NG times, where NG is a sufficiently large number.
Step 8. Return the last generation and the fitness of its elements.
In our simulations the initial population in Step 1 was generated as a population of mutations of the optimal solution of the DLP(Ā, b̄, c̄) problem. We also record the best fitness value achieved during the simulations and the vector v to which this best value was assigned. The latter is the solution of the CCLP(Ā, b̄, c̄) problem found by the evolutionary search; it is denoted by xES. The optimal solution of the DLP(Ā, b̄, c̄) problem is denoted by xD.
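The evolutionary search with soft selection can be sketched as follows. This is our illustrative reading, not the paper's code: the population size, the fitness-proportional selection rule and the function names are our assumptions; in the study the fitness would be SIP and x0 would be the deterministic optimum xD:

```python
import numpy as np

def es_soft_selection(x0, fitness, pop_size=20, n_gen=50, sigma=0.1, seed=0):
    """Evolutionary search with soft selection (sketch): the initial
    population consists of normal mutations of x0, each new generation
    is drawn by fitness-proportional ('soft') selection of parents, and
    every child is again a normal mutation with standard deviation sigma.
    Returns the best vector seen and its fitness."""
    rng = np.random.default_rng(seed)
    pop = x0 + rng.normal(0.0, sigma, size=(pop_size, x0.size))
    best_x, best_f = x0, fitness(x0)
    for _ in range(n_gen):
        fit = np.array([fitness(v) for v in pop])
        if fit.max() > best_f:                       # record the best element
            best_f, best_x = fit.max(), pop[fit.argmax()].copy()
        # soft selection: parents drawn with probability proportional
        # to (shifted) fitness, so weak elements may still reproduce
        probs = fit - fit.min() + 1e-12
        probs /= probs.sum()
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        pop = parents + rng.normal(0.0, sigma, size=parents.shape)
    return best_x, best_f
```

Unlike hard truncation selection, the soft rule keeps some diversity in the population, which matters here because the SIP fitness is itself a noisy Monte Carlo estimate.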
To assess the performance of the solution xES we compare it with the performance of the optimal deterministic solution xD in two frameworks, stochastic and deterministic (ideal). To do this we propose the following performance indicators:
- the statistical performance rate: SPR = SIP(xES) / SIP(xD),
- the ratio of SIP(xES) to the utility of the optimal objective function value in the deterministic case: SDR = SIP(xES) / U(f(xD)).
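Both indicators are simple ratios once the SIP values and the deterministic optimum are known; a minimal sketch (the function name is ours):

```python
def performance_rates(sip_es, sip_d, f_d_opt):
    """SPR compares the evolutionary solution with the deterministic one
    inside the stochastic framework; SDR compares it with the ideal
    deterministic optimum. U is the utility assumed in the study."""
    U = lambda y: y if y >= 0 else 0.0
    spr = sip_es / sip_d           # SPR = SIP(xES) / SIP(xD)
    sdr = sip_es / U(f_d_opt)      # SDR = SIP(xES) / U(f(xD))
    return spr, sdr
```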
The values of the indicators obviously depend on the problem parameters, i.e. on the tuple (Ā, b̄, c̄) and the dimensions of its elements. Thus we compute the values of the indicators for various values of the parameters and compare their statistical characteristics: maximum value, minimum value, median, mean value and standard deviation. In order to obtain the statistical data we use the following simulation procedure.
Step 1. Set the parameters n, k, NSIP, NG and K.
Step 2. Randomly generate the tuple (Ā, b̄, c̄).
Step 3. Solve the DLP(Ā, b̄, c̄) problem by the simplex algorithm and obtain the solution xD, the optimal value f(xD) and SIP(xD).
Step 4. Solve the CCLP(Ā, b̄, c̄) problem by the ES-SS algorithm and obtain the solution xES and SIP(xES).
Step 5. Compute and record the values of SPR and SDR for this setup.
Step 6. Repeat Steps 2 to 5 K times, where K is a sufficiently large number.
Step 7. Return the maximum values, minimum values, medians, mean values and standard deviations of SPR and SDR.
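Step 3 amounts to solving an ordinary linear program. A minimal sketch using scipy.optimize.linprog (our illustration: modern scipy solves LPs with the HiGHS solvers rather than the classic simplex method, so this reproduces the step only up to the choice of algorithm):

```python
import numpy as np
from scipy.optimize import linprog

def dlp_solution(A_bar, b_bar, c_bar):
    """Solve DLP(A_bar, b_bar, c_bar): maximize c^T x subject to
    Ax <= b, x >= 0. linprog minimizes, so we negate the objective."""
    res = linprog(-c_bar, A_ub=A_bar, b_ub=b_bar,
                  bounds=[(0, None)] * len(c_bar))
    return res.x, -res.fun   # optimal point and optimal value f(xD)
```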
In our research we use the following values of the parameters: NSIP = 500, n = 5, 10, 15, k = 10 and K = 50. In the ES-SS algorithm the distributions of mutations are normal with a constant standard deviation equal to 0.1. The values of the parameter m are drawn from the set {n−2, ..., n+5}. The elements of the tuple (Ā, b̄, c̄) are
drawn from the interval [200, 700]. In the SIP procedure the random disturbances of each element of the matrix and of both vectors are normal with zero mean and standard deviation equal to 10% of the value of the element.
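Under this parameterization, Step 2 and a single random realization of the problem can be sketched as follows (function names are ours):

```python
import numpy as np

def random_tuple(m, n, rng):
    """Step 2: draw the mean tuple (A_bar, b_bar, c_bar) with all
    entries uniform on [200, 700], as in the study's setup."""
    A_bar = rng.uniform(200, 700, size=(m, n))
    b_bar = rng.uniform(200, 700, size=m)
    c_bar = rng.uniform(200, 700, size=n)
    return A_bar, b_bar, c_bar

def realization(A_bar, b_bar, c_bar, rng, sd_frac=0.1):
    """One random realization of CCLP: each element is normal with the
    corresponding mean value and standard deviation equal to 10% of it
    (equivalently, a zero-mean disturbance added to each element)."""
    A = rng.normal(A_bar, sd_frac * A_bar)
    b = rng.normal(b_bar, sd_frac * b_bar)
    c = rng.normal(c_bar, sd_frac * c_bar)
    return A, b, c
```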
In the following tables we present the results of our simulations. The first one shows the results obtained for the indicator SPR in the cases n = 5, 10, 15.
We can see that the performance of the solution xES compares very well with the performance of the deterministic optimal solution xD. The expected utility of xES is at least three times greater than the expected utility of xD (see the first column). The ratio of the expected utilities may even exceed 200 (see the second column of Table 1). On average, the greater the number of decision variables n, the greater the ratio (see the fourth and fifth columns).
Another question is how large the expected utility of the mean value of the objective function is when the solution xES is used, in comparison with the utility of the optimal value of the objective function in the ideal, deterministic case. The answer is given by the values of the indicator SDR presented in Table 2.
We see that the average expected utility amounts to about 80% of the ideal optimal value. Because the standard deviations are rather small, this indicates a very good performance of the solution xES. The obtained values of the index SDR show that no other algorithm could achieve much better performance in the stochastic framework.
The last question addressed in this paper is how many generations should be created to obtain a satisfactory solution. Table 3 provides the statistical characteristics of the number of the generation containing the best element in our simulations.
The first two columns of Table 3 contain the minimal and maximal numbers of generations needed to obtain the element with the highest value of expected utility. We see that a hundred generations was always enough; 99 was the maximal value. The median and mean of these numbers show that, on average, forty generations are sufficient to obtain a satisfactory solution for the considered CCLP problems.
Final remarks
The solution of the CCLP(Ā, b̄, c̄) problem found by the evolutionary search may be called satisfactory rather than optimal. Its optimality cannot be proved, and we do not even believe that it is optimal. But the solution is relatively easy to find and, in the considered stochastic framework, much better than the optimal solution found in the deterministic case. The improvement, measured in terms of the average value of the statistical performance rate SPR, amounts to about 7 (for n = 5) or even to about 60 (for n = 15). The solution may be considered satisfactory especially because of the high values of the indicator SDR: its average values are about 80%. Taking into account that the standard deviations of the random variables disturbing all elements of the tuple (Ā, b̄, c̄) amount to 10% of their original values, one should not expect much more, even when applying more sophisticated methods.