Stochastic Optimization Algorithm based on Deterministic Approximations

Mohan Krishnamoorthy^{1,a}, Alexander Brodsky^{2,b} and Daniel A. Menascé^{2,c}

^1 Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL 60439, U.S.A.
^2 Department of Computer Science, George Mason University, Fairfax, VA 22030, U.S.A.
Keywords: Decision Support, Decision Guidance, Deterministic Approximations, Stochastic Simulation Optimization, Heuristic Algorithm.
Abstract:
We consider steady-state production processes that have feasibility constraints and metrics of cost and through-
put that are stochastic functions of process controls. We propose an efficient stochastic optimization algorithm
for the problem of finding process controls that minimize the expectation of cost while satisfying deterministic
feasibility constraints and stochastic steady state demand for the output product with a given high probability.
The proposed algorithm is based on (1) a series of deterministic approximations to produce a candidate set
of near-optimal control settings for the production process, and (2) stochastic simulations on the candidate
set using optimal simulation budget allocation methods. We demonstrate the proposed algorithm on a use
case of a real-world heat-sink production process that involves contract suppliers and manufacturers as well as
unit manufacturing processes of shearing, milling, drilling, and machining, and conduct an experimental study
that shows that the proposed algorithm significantly outperforms four popular simulation-based stochastic
optimization algorithms.
1 INTRODUCTION
This paper considers steady-state processes that pro-
duce a discrete product and have feasibility con-
straints and metrics of cost and throughput that are
stochastic functions of process controls. We are con-
cerned with the development of a one-stage stochas-
tic optimization algorithm for the problem of finding
process controls that minimize the expectation of cost
while satisfying deterministic feasibility constraints
and stochastic steady state demand for the output
product with a given high probability. These prob-
lems are prevalent in manufacturing processes, such
as machining, assembly lines, and supply chain management. The problem is to find process controls, such as machine speeds, that not only satisfy the feasibility constraints of staying within a constant capacity, but also satisfy the production demand. In the stochastic case, we want to find these process controls at the minimum expected cost, with the demand satisfied with a high probability, e.g., 95%.
^a https://orcid.org/0000-0002-0828-4066
^b https://orcid.org/0000-0002-0312-2105
^c https://orcid.org/0000-0002-4085-6212
Stochastic optimization has typically been performed using simulation-based optimization techniques (see (Amaran et al., 2016) for a review of
such techniques). Tools like SIMULINK (Dabney
and Harman, 2001) and Modelica (Provan and Ven-
turini, 2012) allow users to run stochastic simula-
tions on models of complex systems in mechani-
cal, hydraulic, thermal, control, and electrical power.
Tools like OMOptim (Thieriota et al., 2011), Effi-
cient Traceable Model-Based Dynamic Optimization
(EDOp) (OpenModelica, 2009), and jMetal (Durillo
and Nebro, 2011) use simulation models to heuris-
tically guide a trial and error search for the optimal
answer. The above approaches are limited because
they use simulation as a black box instead of using
the problem’s underlying mathematical structure.
From the work on Mathematical Programming
(MP), we know that, for deterministic problems, uti-
lizing the mathematical structure can lead to sig-
nificantly better results in terms of optimality of
results and computational complexity compared to
simulation-based approaches (e.g., (Amaran et al.,
2016) and (Klemmt et al., 2009)). For this reason, a
number of approaches have been developed to bridge
the gap between stochastic simulation and MP. For
Krishnamoorthy, M., Brodsky, A. and Menascé, D.
Stochastic Optimization Algorithm based on Deterministic Approximations.
DOI: 10.5220/0010343802870294
In Proceedings of the 10th International Conference on Operations Research and Enterprise Systems (ICORES 2021), pages 287-294
ISBN: 978-989-758-485-5
Copyright © 2021 by SCITEPRESS Science and Technology Publications, Lda. All rights reserved
instance, (Thompson and Davis, 1990) propose an in-
tegrated approach that combines simulation with MP
where the MP problem is constructed from the orig-
inal stochastic problem with uncertainties being re-
solved to their mean values by using a sample of
black-box simulations. This strategy of extracting
an MP from the original problem is also used by
(Paraskevopoulos et al., 1991) to solve the optimal ca-
pacity planning problem by incorporating the original
objective function augmented with a penalty on the
sensitivity of the objective function to various types of
uncertainty. The authors of (Xu et al., 2016) propose an ordinal transformation framework: a two-stage optimization approach that first extracts a low-fidelity model, using either simulation or a queuing network model built on simplifying assumptions, and then uses this model to reduce the search space over which high-fidelity simulations are run to find the optimal solution to the original problem. However, extraction of the mathematical structure through sampling using a black-box simulation is computationally expensive, especially for real-world production processes composed of complex process networks.
In (Krishnamoorthy et al., 2015), instead of ex-
tracting the mathematical structure using black-box
simulation, the mathematical structure is extracted
from a white-box simulation code analysis as part of
a heuristic algorithm to solve a stochastic optimization problem of finding controls for temporal production processes with inventories so as to minimize the total cost while satisfying the stochastic demand with
a predefined probability. Similar to the previous ap-
proaches, the mathematical structure is used for ap-
proximating a candidate set of solutions by solving a
series of deterministic MP problems that approximate
the stochastic simulation. However, the class of prob-
lems considered in (Krishnamoorthy et al., 2015) is
limited to processes described using piece-wise linear
arithmetic. In (Krishnamoorthy et al., 2018), the ap-
proach from (Krishnamoorthy et al., 2015) is gener-
alized to solve stochastic optimization problems that
may involve non-linear arithmetic with stochastic ob-
jective and multiple demand constraints, like those in
temporal production processes. However, many pro-
duction processes, especially in manufacturing, have
models that are in steady-state and have only a sin-
gle demand constraint that can be solved more easily.
To close this gap, this paper specializes the approach
from (Krishnamoorthy et al., 2018) for a single de-
mand constraint to solve the stochastic optimization
problems for steady-state production processes de-
scribed using non-linear arithmetic.
More specifically, the contributions of this paper
are three-fold: (A) a specialized heuristic algorithm
called Stochastic Optimization Algorithm Based on
Deterministic Approximations (SODA) to solve the
problem of finding production process controls that
minimize the expectation of cost while satisfying
the deterministic process feasibility constraints and
stochastic steady state demand for the output prod-
uct with a given high probability. The proposed
algorithm is based on (1) a series of deterministic
approximations to produce a candidate set of near-
optimal control settings for the production process,
and (2) stochastic simulations on the candidate set
using optimal simulation budget allocation methods
(e.g., (Chen et al., 2000), (Chen and Lee, 2011),
(Lee et al., 2012)). (B) a demonstration of the pro-
posed algorithm on a use case of a real-world heat-
sink production process that involves 10 processes in-
cluding contract suppliers and manufacturers as well
as unit manufacturing processes of shearing, milling,
drilling, and machining with models from the litera-
ture that use non-linear physics-based equations. (C)
an initial experimental study using the heat-sink pro-
duction process to compare the proposed algorithm
with four popular simulation-based stochastic opti-
mization algorithms, viz., Nondominated Sorting Genetic Algorithm 2 (NSGA2) (Deb et al., 2002), Indicator Based Evolutionary Algorithm (IBEA) (Zitzler and Künzli, 2004), Strength Pareto Evolutionary Algorithm 2 (SPEA2) (Zitzler et al., 2001), and Speed-constrained Multi-objective Particle Swarm Optimization (SMPSO) (Nebro et al., 2009). The experimental
study demonstrates that SODA significantly outper-
forms the other algorithms in terms of optimality of
results and computation time. In particular, running over a 12-process problem using an 8-core server with 16 GB RAM, SODA achieves, in 40 minutes, a production cost lower than that of competing algorithms
by 61%; in 16 hours SODA achieves 29% better cost;
and, in 3 days it achieves 7% better cost.
The rest of this paper is organized as follows. Section 2 formally describes the stochastic optimization problem over steady-state production processes. SODA, including deterministic approximations, is presented in Section 3. Experimental results are presented in Section 4. Finally, Section 5 concludes with some future research directions.
2 OPTIMIZATION OF
PRODUCTION PROCESSES
WITH SCFA
We now specialize the stochastic optimization prob-
lem from (Krishnamoorthy et al., 2018) for steady-
state processes with a single feasibility constraint over
a stochastic metric. The stochastic optimization prob-
lem for such processes assumes a stochastic closed-
form arithmetic (SCFA) simulation of the following
form. An SCFA simulation on input variables $\vec{X}$ is a sequence $y_1 = expr_1, \ldots, y_n = expr_n$, where each $expr_i$, $1 \le i \le n$, is either

(a) an arithmetic or boolean expression in terms of a subset of the elements of $\vec{X}$ and/or $y_1, \ldots, y_{i-1}$ (we say that $y_i$ is arithmetic or boolean if $expr_i$ is arithmetic or boolean, correspondingly); or

(b) an expression invoking $PD(\vec{P})$, a function that draws from a probability distribution using parameters $\vec{P}$ that are a subset of the elements of $\vec{X}$ and/or $y_1, \ldots, y_{i-1}$.

We say that $y_i$, $1 \le i \le n$, is stochastic if, recursively,

(a) $expr_i$ invokes $PD(\vec{P})$, or

(b) $expr_i$ uses at least one stochastic variable $y_j$, $1 \le j < i$.

If $y_i$ is not stochastic, we say that it is deterministic. Also, we say that an SCFA simulation $S$ computes a variable $v$ if $v = y_i$ for some $1 \le i \le n$.
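As a concrete illustration (not taken from the paper), an SCFA simulation is simply straight-line code whose assignments are either plain expressions or draws from parameterized distributions. The toy simulation below is invented for illustration; `random.gauss` plays the role of $PD(\vec{P})$:

```python
import random

def scfa_simulation(x1, x2):
    """A toy SCFA-style simulation: a straight-line sequence of assignments."""
    y1 = x1 * 2.0                    # arithmetic, deterministic
    y2 = random.gauss(x2, 0.1 * x2)  # invokes PD(mu, sigma): stochastic
    y3 = y1 + y2                     # uses the stochastic y2, so y3 is stochastic
    y4 = y1 <= 100.0                 # boolean, deterministic (depends only on y1)
    cost, thru, feasible = y3, y2, y4
    return cost, thru, feasible

cost, thru, feasible = scfa_simulation(3.0, 10.0)
```

Here `y1` and `y4` are deterministic, while `y2` and `y3` are stochastic by the recursive definition above.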
This paper considers the stochastic optimization
problem of finding process controls that minimize the
cost expectation while satisfying deterministic pro-
cess constraints and steady state demand for the out-
put product with a given probability. More formally,
the stochastic optimization problem is of the form:

$$\begin{aligned}
\underset{\vec{X} \in \vec{D}}{\text{minimize}} \quad & E(cost(\vec{X})) \\
\text{subject to} \quad & C(\vec{X}) \\
& P(thru(\vec{X}) \ge \theta) \ge \alpha
\end{aligned} \qquad (1)$$

where $\vec{D} = D_1 \times \cdots \times D_n$ is the domain for decision variables $\vec{X}$, $\vec{X}$ is a vector of decision variables over $\vec{D}$, $cost(\vec{X})$ is a random variable defined in terms of $\vec{X}$, $thru(\vec{X})$ is a random variable defined in terms of $\vec{X}$, $C(\vec{X})$ is a deterministic constraint in $\vec{X}$, i.e., a function $C : \vec{D} \to \{true, false\}$, $\theta \in \mathbb{R}$ is a throughput threshold, $\alpha \in [0,1]$ is a probability threshold, and $P(thru(\vec{X}) \ge \theta)$ is the probability that $thru(\vec{X})$ is greater than or equal to $\theta$.
Note that, upon increasing θ to some θ' ≥ θ, the space of alternatives that satisfy the stochastic demand constraint in (1) shrinks, and hence the best achievable solution, i.e., the minimum expected cost, is monotonically non-decreasing in θ'. We assume that the random variables $cost(\vec{X})$ and $thru(\vec{X})$, as well as the deterministic constraint $C(\vec{X})$, are expressed by an SCFA simulation $S$ that computes the stochastic arithmetic variables $(cost, thru) \in \mathbb{R}^2$ as well as the deterministic boolean variable $C \in \{true, false\}$. Many complex real-world processes can be formulated as SCFA simulations, as described in (Krishnamoorthy et al., 2017).
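To make the roles of the objective and the chance constraint in (1) concrete, the two quantities can be estimated by Monte Carlo over an SCFA simulation. The sketch below uses an invented stand-in simulation (a single control `x` with 5% normal noise on throughput); it is illustrative only, not the paper's implementation:

```python
import random

def simulate(x):
    """Stand-in SCFA simulation: returns (cost, thru) with invented dynamics."""
    thru = random.gauss(x, 0.05 * x)  # throughput with 5% normal noise on the control
    cost = 2.0 * x + 0.5 * thru       # cost grows with the control setting
    return cost, thru

def estimate(x, theta, n=10_000):
    """Monte Carlo estimates of E(cost) and P(thru >= theta) at control x."""
    total_cost, hits = 0.0, 0
    for _ in range(n):
        c, t = simulate(x)
        total_cost += c
        hits += (t >= theta)
    return total_cost / n, hits / n

exp_cost, p_demand = estimate(x=5.0, theta=4.0)
```

A control setting `x` is then acceptable for (1) when `p_demand >= alpha`, and among acceptable settings the one with the least `exp_cost` is preferred.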
3 STOCHASTIC OPTIMIZATION
ALGORITHM BASED ON
DETERMINISTIC
APPROXIMATIONS
This section presents the Stochastic Optimization
Algorithm Based on Deterministic Approximations
(SODA). The problem of optimizing stochastic pro-
duction processes can be solved using simulation-
based optimization approaches by initializing the con-
trol settings and performing simulations to check
whether the throughput satisfies the demand with suf-
ficient probability. But that approach is inefficient be-
cause the stochastic space of this problem is very large
and hence this approach will typically converge very
slowly to the optimum solution. So, the key idea of
SODA is that instead of working with a large number
of choices in the stochastic space, we use determin-
istic approximations to generate a small set of candi-
date control settings and then validate these control
settings in the stochastic space using simulations.
An overview of SODA is shown in Fig. 1. To generate a small set of candidate control settings, SODA performs deterministic approximations of the original stochastic problem by defining a deterministic computation $S'$ from the SCFA simulation $S$ described in Section 2, replacing every expression that invokes a probability distribution $PD(\vec{P})$ with the expectation of that distribution. The deterministic approximations $cost'(\vec{X})$ and $thru'(\vec{X})$ of $cost(\vec{X})$ and $thru(\vec{X})$, respectively, can be expressed using $S'$. To optimize this reduced problem, a deterministic optimization problem (see (2)) that approximates the stochastic optimization problem (1) is used as a heuristic:

$$\begin{aligned}
\underset{\vec{X} \in \vec{D}}{\text{minimize}} \quad & cost'(\vec{X}) \\
\text{subject to} \quad & C(\vec{X}) \\
& thru'(\vec{X}) \ge \theta'
\end{aligned} \qquad (2)$$

where $\theta' \ge \theta$ is a conservative approximation of θ.
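A minimal sketch of this construction, under the same invented stand-in simulation as before: $S'$ is obtained by replacing the single `PD(.)` draw with the expectation of its distribution, and problem (2) is then solved here by naive grid search (a real implementation would hand the extracted mathematical structure to an MP solver):

```python
import random

def simulate(x, deterministic=False):
    """Stand-in SCFA simulation; deterministic=True builds S' by replacing
    the PD(.) draw with the expectation of its distribution."""
    mu, sigma = x, 0.05 * x
    thru = mu if deterministic else random.gauss(mu, sigma)
    cost = 2.0 * x + 0.5 * thru
    return cost, thru

def solve_deterministic(theta_prime, grid):
    """Grid-search heuristic for problem (2): minimize cost'(x)
    subject to thru'(x) >= theta' (C(x) is taken as always true here)."""
    best = None
    for x in grid:
        cost, thru = simulate(x, deterministic=True)
        if thru >= theta_prime and (best is None or cost < best[0]):
            best = (cost, x)
    return best  # (cost', x), or None if no grid point is feasible

best = solve_deterministic(theta_prime=4.0, grid=[i / 10 for i in range(1, 101)])
```

With the invented dynamics above, the deterministic throughput equals the control itself, so the cheapest feasible setting sits exactly at the demand threshold.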
Figure 1: Overview of SODA (flowchart of the inflate phase, the deflate phase, and the refine-candidates phase using OCBA-CO).
This deterministic approximation is performed it-
eratively such that the control settings found in the
current iteration are more likely to generate through-
puts that satisfy demand with the desired probability
than in the previous iterations. This is possible be-
cause of the inflate phase (left box of Fig. 1) and the
deflate phase (middle box of Fig. 1) of SODA.
Intuitively, the inflate phase tries to increase the
throughput to satisfy the demand with the desired
probability. When the current candidate control set-
tings do not generate the throughput that satisfies the
original user-defined demand with a desired probabil-
ity, the demand parameter itself is exponentially in-
flated such that this inflation does not render future
iterations infeasible. This is ensured by precomput-
ing the feasible throughput range, which is the inter-
val between the requested demand and the through-
put bound where the production process is at capac-
ity. This bound is computed by solving the optimiza-
tion problem in (3). Then, the demand parameter is
inflated within the throughput range. This inflation
of the demand parameter yields higher controls for
the machines and thus increases the overall through-
put. However, this may result in the throughput over-
shooting the demand in the stochastic setting, which
degrades the objective cost.
$$\begin{aligned}
\underset{\vec{X} \in \vec{D}}{\text{maximize}} \quad & thru(\vec{X}) \\
\text{subject to} \quad & C(\vec{X})
\end{aligned} \qquad (3)$$
To overcome this, in the deflate phase, SODA
scales back the demand by splitting it into a number
of points, separated by a small epsilon, in the interval
between the inflated demand where throughput over-
shot the original demand and the previous demand at
which the last inflation occurred. Deterministic ap-
proximations are run for this lower demand to check
whether the throughput still satisfies the demand with
desired probability while yielding a better objective
cost. In this way, the inflate and deflate phases find a demand threshold that strikes the right balance between optimal cost and demand satisfaction with the desired probability.
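The interplay of the two phases can be sketched as follows. The interface is invented for illustration: `check(t)` stands in for the whole pipeline of solving (2) at inflated demand `t` and testing, by simulation, whether the resulting candidate satisfies the original demand with sufficient probability; `theta_max` is the throughput bound obtained from (3):

```python
def inflate_deflate(theta, theta_max, check, jump=1.0, eps=0.05, max_iter=50):
    """Simplified sketch of SODA's inflate/deflate loop (invented interface)."""
    prev, cur, step = theta, theta, 1.0
    for _ in range(max_iter):
        if check(cur):
            # Deflate: scan intermediate demands, separated by eps, between the
            # last insufficient demand (prev) and the current sufficient one (cur).
            d = prev + eps
            while d < cur:
                if check(d):
                    return d  # lower demand that still satisfies, hence cheaper
                d += eps
            return cur
        # Inflate exponentially, but stay inside the feasible throughput range
        # precomputed by solving problem (3).
        prev, step = cur, step * (1.0 + jump)
        cur = min(theta + step, theta_max)
    return cur
```

The `jump` coefficient corresponds to how aggressively the demand parameter is inflated; the cap at `theta_max` is what keeps later iterations feasible.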
After the iterative inflate and deflate procedure,
more simulations may be needed to check if a promis-
ing candidate that currently is not a top candidate
could be the optimal solution or to choose an optimal
solution from multiple candidates. To resolve this,
these candidates are further refined in the refineCan-
didates phase (right box in Fig. 1). In this phase,
SODA refines the candidates in the candidate set gen-
erated using deterministic approximation by running
Monte Carlo simulations on them. To maximize the
likelihood of selecting the best candidate, i.e., can-
didate with the least cost and sufficient probability
of demand satisfaction, this phase uses the Optimal
Computing Budget Allocation method for Constraint
Optimization (OCBA-CO) (Lee et al., 2012) for the
ranking and selection problem with stochastic con-
straints. OCBA-CO allocates budget among the can-
didates so that the greatest fraction of the budget is
allocated to the most critical candidates in the candi-
date set. This apportions the computational budget in
a way that minimizes the effort of simulating candi-
dates that are not critical or those that have low vari-
ability. This phase is performed iteratively for some
delta budget that is allocated among the candidates
using OCBA-CO and, in each iteration, the additional
simulations on the candidates can yield a better ob-
jective cost or a higher confidence of demand satis-
faction. In each subsequent iteration, OCBA-CO uses
the updated estimates to find new promising candi-
dates and allocates a greater fraction of the delta bud-
get to them to check if these candidates can be further
refined.
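The allocation rules of OCBA-CO itself are more involved (see (Lee et al., 2012)); the skeleton below only illustrates the shape of the refineCandidates phase, with a crude stand-in allocation heuristic and an invented `simulate(x) -> (cost, ok)` interface, where `ok` indicates demand satisfaction in one replication:

```python
def refine_candidates(candidates, simulate, alpha=0.95, rounds=10, delta=100):
    """Skeleton of the refineCandidates phase. The scoring rule is a crude
    stand-in for OCBA-CO: each round, a delta budget of simulations is split
    across candidates, favoring those whose feasibility is still uncertain."""
    stats = {x: {"n": 0, "cost": 0.0, "hits": 0} for x in candidates}
    for _ in range(rounds):
        def score(x):
            s = stats[x]
            if s["n"] == 0:
                return 1.0
            p = s["hits"] / s["n"]
            # Candidates whose feasibility estimate sits near alpha, with few
            # replications so far, are the most critical to simulate further.
            return 1.0 / (1.0 + s["n"] * abs(p - alpha))
        total = sum(score(x) for x in candidates)
        for x in candidates:
            for _ in range(max(1, int(delta * score(x) / total))):
                c, ok = simulate(x)
                s = stats[x]
                s["n"] += 1; s["cost"] += c; s["hits"] += ok
    feasible = [x for x in candidates
                if stats[x]["hits"] / stats[x]["n"] >= alpha]
    if not feasible:
        return None
    return min(feasible, key=lambda x: stats[x]["cost"] / stats[x]["n"])
```

In SODA proper, each delta budget is apportioned by OCBA-CO's ranking-and-selection rules rather than this ad hoc score, but the overall loop structure is the same: simulate, update estimates, re-allocate, repeat.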
In this way, SODA uses the model knowledge in
inflate, deflate, and refineCandidates phases to pro-
vide optimal control settings of the non-linear pro-
cesses that a process operator can use on the manu-
facturing floor in a stochastic environment.
4 EXPERIMENTAL RESULTS
This section discusses experimental results that evalu-
ate SODA by comparing the quality of objective cost
and rate of convergence obtained from SODA with
other metaheuristic simulation-based optimization al-
gorithms. We used the SCFA simulation of the Heat-
Sink Production Process (HSPP) described in (Kr-
ishnamoorthy et al., 2017) in the experiments. The
SCFA simulation of HSPP is written in JSONiq and
SODA is written in Java. The deterministic approxi-
mations for SODA are performed using a system that
automatically converts the SCFA simulation of HSPP
into a deterministic optimization problem, which is a
deterministic abstraction of the original SCFA simu-
lation.
For the comparison algorithms, we used the
jMetal package (Durillo and Nebro, 2011). The al-
gorithms chosen for comparison include Nondominated Sorting Genetic Algorithm 2 (NSGA2) (Deb et al., 2002), Indicator Based Evolutionary Algorithm (IBEA) (Zitzler and Künzli, 2004), Strength Pareto Evolutionary Algorithm 2 (SPEA2) (Zitzler et al., 2001), and Speed-constrained Multi-objective Particle Swarm Optimization (SMPSO) (Nebro et al., 2009). These are popular multi-objective simulation-
based optimization algorithms and they were cho-
sen because they performed better than other single
and multi-objective simulation-based optimization al-
gorithms available in jMetal. The swarm/population
size for the selected algorithms was set to 100. Also,
we ran these algorithms with the following operators
(wherever applicable): (a) polynomial mutation operator with probability of 1/(noOfDecisionVariables) and distribution index of 20; (b) simulated binary
crossover operator with probability of 0.9 and distri-
bution index of 20; and (c) binary tournament selec-
tion operator. For each algorithm, the maximum num-
ber of evaluations was set to 500,000 and the maxi-
mum time to complete these evaluations was set to the
time that SODA ran for. All these algorithms use the
SCFA of HSPP to perform the cost, throughput and
feasibility constraint computations. The computation
of the demand satisfaction information from expected
throughput is also similar to that of SODA. The cost
and demand satisfaction information is then used by
the jMetal algorithms to further increase or decrease
the control settings using heuristics. The source code for the SCFA simulation of HSPP, the SODA algorithm, and the jMetal-based comparison is available at (Krishnamoorthy et al., 2020).
Initially, SODA was run with two different set-
tings. In the first setting, SODA was run for 16 hours
and the number of candidates collected from the InflateDeflate phase was 2,000.

Figure 2: Estimated average cost for the elapsed time of 16 hours.

In the second setting,
SODA was run for approximately 72 hours (3 days)
and the number of candidates collected was 2,000.
The inflateDeflate phase was run for a certain amount of
time and to accumulate the required candidates within
this time, the bound on the deterministic constraints
over each production process in HSPP was increased
so that the throughput range was wide. Additionally,
all production processes were stochastic due to 5%
of standard normal noise added to the controls. Fi-
nally, the demand from the HSPP was set to be 4. The
comparison algorithms were also run with the same
(relevant) parameters.
Figure 3: Estimated average cost for the elapsed time of 3
days.
The data collected from the experiments include
the estimated average costs achieved at different
elapsed time points for SODA in two settings and all
the comparison algorithms. Figure 2 shows the esti-
mated average costs achieved after 16 hours whereas
Fig. 3 shows the estimated average costs achieved
after about three days. Each experiment was run mul-
tiple times and 95% confidence bars are included at
certain elapsed time points in both figures.
It can be seen that both settings of SODA per-
form better than the comparison algorithms initially.
This is because SODA uses deterministic approxima-
tion to reduce the search space of potential candidates
quickly whereas the competing algorithms start at a
random point in the search space and need a number
of additional iterations (and time) to find candidates
closer to those found by SODA.
As time progresses, SODA in both settings is able
to achieve much better expected cost in the inflat-
eDeflate phase than the other algorithms. Hence,
SODA converges quicker toward the points close to
the (near) optimal solution found at the end of the al-
gorithm. This is because SODA uses heuristics in the
inflate and deflate phases that reduce the search space
quickly, which allows SODA to quickly converge to-
ward a more promising candidate.
Also, the solutions found by SODA at the end
of the experiment are much better than those found
by the competing algorithms. After 16 hours, the
expected cost found by SODA was 29% better than
the nearest comparison algorithm (SMPSO) and 49%
better than the second best comparison algorithm
(NSGA2) (see Fig. 2). After three days though, the
advantage of SODA over the nearest comparison al-
gorithm (SMPSO) reduces to 7% and that to the sec-
ond best algorithm (NSGA2) reduces to 17% (see Fig.
3). This is not surprising since SMPSO and NSGA2
use strong meta-heuristics and given enough time
such algorithms will eventually converge toward the
(near) optimal solution found by SODA. However, it should be noted from Fig. 3 that SODA reaches close to the optimal solution found at the end much more quickly than its competition. Also, since the
x-axis in Fig. 3 is in log scale, it should be noted
that out of the total experiment time of about 259,200
sec, the other algorithms stop improving at around
180,911 sec whereas SODA continues to improve
for a longer time (until about 210,562 sec) due to a
good collection of candidates from the InflateDeflate
phase and performing simulation refinements using
an optimal budget allocation scheme of OCBA-CO
in RefineCandidates phase. We also ran the Tukey-
Kramer procedure for pairwise comparisons of corre-
lated means on all six algorithm types. By doing so,
we confirmed that SODA was indeed better than the
other algorithms when the experiment ended.
We believe that SODA is sensitive to three cat-
egories of parameters of the production process viz.
(1) the throughput range as determined by the inter-
val between the requested demand and the throughput
bound where the production process is at capacity, (2)
the level of noise added to the controls of each pro-
duction process, and (3) the coefficient used to inflate
the demand parameter, where the higher value cor-
responds to inflating more aggressively. To evaluate
SODA against the comparison algorithms for these
categories, we ran the experiment for different set-
tings of each of these parameters. For the through-
Figure 4: Estimated average cost for the elapsed time of
3 days for a wide throughput range where the noise level
and inflation jump is A: high (10%) and very aggressive
(1.3x); B: high (10%) and aggressive (1x); C: high (10%)
and less aggressive (0.8x); D: low (5%) and very aggressive
(1.3x); E: low (5%) and aggressive (1x); F: low (5%) and
less aggressive (0.8x).
put range, we chose a wide range where the capacity
of each production process of HSPP was high and a
narrow range where the capacity was lower. For the
level of noise, we chose 5% of standard normal noise
as before as well as a higher noise of 10% of standard
normal noise. Finally, the inflation jump was (a) more
aggressive when a larger fraction of multiplicative ex-
ponential factor was added to the current demand, e.g,
1.3x, (b) aggressive when a smaller fraction of this
factor was added to the current demand, e.g, 1x, and
(c) less aggressive when an even smaller fraction of
this factor was added to the current demand, e.g, 0.8x.
To understand the effects of these parameters com-
prehensively, SODA and the comparison algorithms
were run for 72 hours (3 days). The time for the in-
flateDeflate phase and the number of candidates col-
lected during this phase was maintained the same as
before. Also, the demand of HSPP was maintained at
4 so that we could perform a direct comparison of the
estimated average costs over the elapsed time across
these experiments.
Figures 4 and 5 show the estimated average costs
achieved after three days when the throughput range
was wide and when the throughput range was narrow,
respectively. To increase or decrease the throughput
range, the capacity of the production process was in-
creased or decreased such that the throughput bound
obtained by solving the optimization model in equa-
tion 3 would change accordingly. In Figs. 4 and 5
the first row shows results for the case when the noise
level is 10% of the standard normal noise and the sec-
ond row shows results for the case where the noise
level is 5% of that noise. The first, second and third
column of both these figures show results for the case
when the level of inflation jump was more aggressive,
aggressive and less aggressive, respectively. Since
Figure 5: Estimated average cost for the elapsed time of 3
days for a narrow throughput range where the noise level
and inflation jump is A: high (10%) and very aggressive
(1.3x); B: high (10%) and aggressive (1x); C: high (10%)
and less aggressive (0.8x); D: low (5%) and very aggressive
(1.3x); E: low (5%) and aggressive (1x); F: low (5%) and
less aggressive (0.8x).
the inflation jump occurs in the inflateDeflate phase
of SODA, the results from the comparison algorithms
remained the same in any particular row of Figs. 4
and 5.
Table 1 shows the percentage difference of the es-
timated average costs between SODA and the most
competitive comparison algorithm, SMPSO, at the
end of the experiment. When the throughput range is
wide and the inflation jump is more aggressive (see
Figs. 4A and 4D), SODA decreases the cost more
rapidly than when the inflation jump is less aggres-
sive (see Figs. 4C and 4F). This is because SODA is
able to quickly find candidates that satisfy the demand
with sufficient probability because of the more ag-
gressive inflation jump. The initial candidates found
by the more aggressive case (not shown here) have
high expected cost due to the throughput range be-
ing wide. Additionally, SODA is affected by greater
amount of noise since greater inflation is required to
satisfy the demand with sufficient probability. This
yields worse candidates collected during the inflat-
eDeflate phase when the inflation jump is less aggres-
sive and the noise is high (see Fig. 4C). It seems that
when the throughput range is wide, the most effec-
tive strategy for inflation jump is for it to be neither
less nor more aggressive because the cost decreases
rapidly in the inflateDeflate phase and the candidates
collected in this phase give the best gain in the ex-
pected cost over the comparison algorithms at the end
of the experiment (see Figs. 4B and 4E).
When the throughput range is narrow and noise
is high, we can see that the more aggressive case
shown in Fig. 5A performs very poorly. A possi-
ble explanation is that the inflated demand keeps hit-
ting the throughput boundary due to more aggressive
inflation. This either yields higher expected cost or
for the interval explored by deflate not to be wide
enough for the expected cost to improve. This phe-
nomenon improves when the level of aggression is
reduced as shown in Fig. 5B and the candidates col-
lected in the inflateDeflate phase are the best for this
scenario when the inflation level is less aggressive as
shown in Fig. 5C. When the noise level is reduced, the
candidates collected in the inflateDeflate phase with
more aggressive inflation jump (see Fig. 5D) are bet-
ter as compared to when the noise level is higher (see
Fig. 5A). This happens because more candidates sat-
isfy the demand with sufficient probability within the
narrow throughput range when the noise level is low.
Hence, when the throughput range is narrow, it seems
that the most effective strategy for the inflation level is for it to be between aggressive and less aggressive, since
SODA achieves the best gain over the comparison al-
gorithms at this inflation level. This is because SODA
explores the narrow throughput range better for fea-
sible candidates and finds better optimal candidates
among them at this inflation level.
Table 1: Percentage difference of the estimated average cost
between SODA and the most competitive comparison algorithm,
SMPSO, found at the end of the experiment, where positive
percentages mean SODA did better and negative percentages
mean SMPSO did better.

                         Wide throughput   Wide throughput   Narrow throughput   Narrow throughput
Inflation jump level     range, 10% noise  range, 5% noise   range, 10% noise    range, 5% noise
Very aggressive (1.3x)        -16                2                -23                  1
Aggressive (1x)                 9                7                  2                  8
Less aggressive (0.8x)          5                2                 -2                  7
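The sign convention in Table 1 can be made concrete with a small sketch; the exact formula and the cost values below are illustrative assumptions, not figures from the study:

```python
def pct_gain(cost_soda, cost_smpso):
    """Assumed reading of Table 1: positive when SODA's estimated
    average cost is lower (SODA did better), negative when SMPSO's
    is lower."""
    return 100.0 * (cost_smpso - cost_soda) / cost_smpso

# Illustrative costs only:
print(pct_gain(91.0, 100.0))   # SODA 9% better
print(pct_gain(116.0, 100.0))  # SMPSO 16% better
```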
5 CONCLUSIONS
This paper presented an efficient stochastic optimization
algorithm called SODA for the problem of finding
process controls of a steady-state production process
that minimize the expectation of cost while satisfying
deterministic feasibility constraints and stochastic
steady-state demand for the output product with a given
high probability. SODA is based on performing
(1) a series of deterministic approximations
to produce a candidate set of near-optimal control set-
tings for the production process, and (2) stochastic
simulations on the candidate set using optimal sim-
ulation budget allocation methods.
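Phase (2) can be pictured with the following simplified sketch. The even per-round budget split below is a placeholder for the OCBA-CO allocation SODA actually uses (which biases replications toward promising, constraint-critical designs), and `simulate` is an assumed per-replication cost simulator:

```python
import statistics


def select_best(candidates, simulate, total_budget=200, rounds=4):
    """Sketch of phase (2): spend a fixed simulation budget on the
    candidates produced by the deterministic approximations and return
    the one with the lowest estimated expected cost. Unlike OCBA-CO,
    this sketch simply splits the budget evenly per round."""
    samples = {c: [] for c in candidates}
    per_round = total_budget // (rounds * len(candidates))
    for _ in range(rounds):
        for c in candidates:
            # One stochastic replication per budget unit.
            samples[c].extend(simulate(c) for _ in range(per_round))
    return min(candidates, key=lambda c: statistics.mean(samples[c]))
```

The round structure is where a real allocation rule would re-weight the budget between rounds based on the running cost means and variances; the total budget remains a fixed input, which is also the limitation noted below.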
This paper also demonstrated SODA on a use case
of a real-world heat-sink production process that in-
volves contract suppliers and manufacturers as well
as unit manufacturing processes of shearing, milling,
drilling, and machining. Finally, an experimental
study showed that SODA significantly outperforms
four popular simulation-based stochastic optimiza-
tion algorithms. In particular, the study showed that
SODA performs better than the best competing algorithm
by 29% after 16 hours and by 7% after three
days. The study also showed that SODA can solve
the optimization problem over a complex network of
production processes and SODA is able to scale well
for such a network. However, the efficiency of SODA
is limited by the total budget allocated to OCBA-CO,
which is an input parameter to the algorithm.
Future research directions include: (a) dynamically
executing the inflateDeflate and candidate refinement
phases of SODA to improve the exploration of the search
space; and (b) comparing SODA with existing stochastic
optimization algorithms based on deterministic
approximations.
ACKNOWLEDGEMENTS
The authors are partly supported by the National Insti-
tute of Standards and Technology Cooperative Agree-
ment 70NANB12H277.
REFERENCES
Amaran, S., Sahinidis, N. V., Sharda, B., and Bury, S. J.
(2016). Simulation optimization: a review of algo-
rithms and applications. Annals of Operations Re-
search, 240(1):351–380.
Chen, C. H. and Lee, L. H. (2011). Stochastic Simulation
Optimization: An Optimal Computing Budget Alloca-
tion. World Scientific Publishing Company, Hacken-
sack, NJ, USA.
Chen, C.-H., Lin, J., Yücesan, E., and Chick, S. E. (2000).
Simulation budget allocation for further enhancing the
efficiency of ordinal optimization. Discrete Event Dy-
namic Systems, 10(3):251–270.
Dabney, J. B. and Harman, T. L. (2001). Mastering
SIMULINK 4. Prentice Hall, Upper Saddle River, NJ,
USA, 1st edition.
Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T. (2002).
A fast and elitist multiobjective genetic algorithm:
NSGA-II. IEEE Transactions on Evolutionary Com-
putation, 6(2):182–197.
Durillo, J. J. and Nebro, A. J. (2011). jMetal: A Java frame-
work for multi-objective optimization. Advances in
Engineering Software, 42(10):760–771.
Klemmt, A., Horn, S., Weigert, G., and Wolter, K.-J.
(2009). Simulation-based optimization vs. mathemat-
ical programming: A hybrid approach for optimiz-
ing scheduling problems. Robotics and Computer-
Integrated Manufacturing, 25(6):917–925.
Krishnamoorthy, M., Brodsky, A., and Menascé, D. A.
(2018). Stochastic decision optimisation based on de-
terministic approximations of processes described as
closed-form arithmetic simulation. Journal of Deci-
sion Systems, 27(sup1):227–235.
Krishnamoorthy, M., Brodsky, A., and Menascé, D. (2015).
Optimizing stochastic temporal manufacturing pro-
cesses with inventories: An efficient heuristic algo-
rithm based on deterministic approximations. In Pro-
ceedings of the 14th INFORMS Computing Society
Conference, pages 30–46.
Krishnamoorthy, M., Brodsky, A., and Menascé, D. A.
(2017). Stochastic optimization for steady state pro-
duction processes based on deterministic approxima-
tions. Technical Report GMU-CS-TR-2017-3, 4400
University Drive MSN 4A5, Fairfax, VA 22030-4444,
USA.
Krishnamoorthy, M., Brodsky, A., and Menascé, D. A.
(2020). Source code for SODA algorithm.
Lee, L. H., Pujowidianto, N. A., Li, L. W., Chen, C. H., and
Yap, C. M. (2012). Approximate simulation budget
allocation for selecting the best design in the presence
of stochastic constraints. IEEE Transactions on Auto-
matic Control, 57(11):2940–2945.
Nebro, A., Durillo, J., García-Nieto, J., Coello Coello,
C., Luna, F., and Alba, E. (2009). SMPSO: A new
PSO-based metaheuristic for multi-objective optimiza-
tion. In 2009 IEEE Symposium on Computational In-
telligence in Multicriteria Decision-Making (MCDM
2009), pages 66–73. IEEE Press.
OpenModelica (2009). Efficient Traceable Model-Based
Dynamic Optimization - EDOp. https://openmodelica.
org/research/omoptim/edop. OpenModelica.
Paraskevopoulos, D., Karakitsos, E., and Rustem, B.
(1991). Robust Capacity Planning under Uncertainty.
Management Science, 37(7):787–800.
Provan, G. and Venturini, A. (2012). Stochastic simula-
tion and inference using Modelica. In Proceedings
of the 9th International Modelica Conference, pages
829–837.
Thieriot, H., Nemer, M., Torabzadeh-Tari, M., Fritzson,
P., Singh, R., and Kocherry, J. J. (2011). Towards
design optimization with OpenModelica emphasizing
parameter optimization with genetic algorithms. In
Modelica Conference. Modelica Association.
Thompson, S. D. and Davis, W. J. (1990). An integrated ap-
proach for modeling uncertainty in aggregate produc-
tion planning. IEEE Transactions on Systems, Man,
and Cybernetics, 20(5):1000–1012.
Xu, J., Zhang, S., Huang, E., Chen, C., Lee, L. H., and
Celik, N. (2016). MO2TOS: Multi-fidelity optimiza-
tion with ordinal transformation and optimal sam-
pling. Asia-Pacific Journal of Operational Research,
33(03):1650017.
Zitzler, E. and Künzli, S. (2004). Indicator-based selection
in multiobjective search. In Proceedings of the 8th
International Conference on Parallel Problem Solving
from Nature, pages 832–842. Springer.
Zitzler, E., Laumanns, M., and Thiele, L. (2001). SPEA2:
Improving the strength pareto evolutionary algorithm.
Technical Report 103, Computer Engineering and
Networks Laboratory (TIK), Swiss Federal Institute
of Technology (ETH), Zurich, Switzerland.
ICORES 2021 - 10th International Conference on Operations Research and Enterprise Systems