10.25394/PGS.7864895.v1
Kyle Cooper
RETROSPECTIVE APPROXIMATION ALGORITHMS FOR MULTI-OBJECTIVE SIMULATION OPTIMIZATION ON INTEGER LATTICES
2019
Purdue University Graduate School
simulation
optimization
multi-objective
2019-06-10 17:28:18
article
https://hammer.figshare.com/articles/RETROSPECTIVE_APPROXIMATION_ALGORITHMS_FOR_MULTI-OBJECTIVE_SIMULATION_OPTIMIZATION_ON_INTEGER_LATTICES/7864895
We consider multi-objective simulation optimization (MOSO) problems, that is, nonlinear optimization problems in which multiple simultaneous objective functions can only be observed with stochastic error, e.g., as output from a Monte Carlo simulation model. In this context, the solution to a MOSO problem is the efficient set, which is the set of all feasible decision points for which no other feasible decision point is at least as good on all objectives and strictly better on at least one objective. We are concerned primarily with MOSO problems on integer lattices, that is, MOSO problems where the feasible set is a subset of an integer lattice.

In the first study, we propose the Retrospective Partitioned Epsilon-constraint with Relaxed Local Enumeration (R-PεRLE) algorithm to solve the bi-objective simulation optimization problem on integer lattices. R-PεRLE is designed for sampling efficiency. It uses a retrospective approximation (RA) framework to repeatedly call the PεRLE sample-path solver at a sequence of increasing sample sizes, using the solution from the previous RA iteration as a warm start for the current RA iteration. The PεRLE sample-path solver is designed to solve the sample-path problem only to within a tolerance commensurate with the sampling error. It comprises a call to each of the Pε and RLE algorithms, in sequence. First, Pε searches for new points to add to the sample-path local efficient set by solving multiple constrained single-objective optimization problems. Pε places constraints to locate new sample-path local efficient points that are a function of the standard error away, in the objective space, from those already obtained. Then, the set of sample-path local efficient points found by Pε is sent to RLE, which is a local crawling algorithm that ensures the set is a sample-path approximate local efficient set.
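The RA structure described above can be sketched as a simple loop; the function names and the toy solver below are illustrative, not the thesis's actual implementation:

```python
def retrospective_approximation(solve_sample_path, x0, sample_sizes):
    """Retrospective approximation (RA) sketch: repeatedly solve a
    sample-path problem at a sequence of increasing sample sizes,
    warm-starting each RA iteration from the previous solution."""
    solution = x0
    for m in sample_sizes:  # increasing, e.g. doubling each RA iteration
        # The sample-path solver need only solve to within a tolerance
        # commensurate with the sampling error at sample size m.
        solution = solve_sample_path(solution, m)
    return solution
```

In R-PεRLE, `solve_sample_path` would be the PεRLE solver (a call to Pε followed by RLE); the warm start is what lets later, more expensive RA iterations begin near a good solution.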
As the number of RA iterations increases, R-PεRLE provably converges to a local efficient set with probability one under appropriate regularity conditions. We also propose a naive, provably convergent benchmark algorithm for problems with two or more objectives, called R-MinRLE. R-MinRLE is identical to R-PεRLE except that it replaces the Pε algorithm with an algorithm that updates one local minimum on each objective before invoking RLE. R-PεRLE performs favorably relative to R-MinRLE and the current state of the art, MO-COMPASS, in our numerical experiments. Our work points to a family of RA algorithms for MOSO on integer lattices that employ RLE for certification of a sample-path approximate local efficient set, and for which the convergence guarantees are provided in this study.

In the second study, we present the PyMOSO software package for solving multi-objective simulation optimization problems on integer lattices, and for implementing and testing new simulation optimization (SO) algorithms. First, for solving MOSO problems on integer lattices, PyMOSO implements R-PεRLE and R-MinRLE, which are developed in the first study. Both algorithms employ pseudo-gradients, are designed for sampling efficiency, and return solutions that, under appropriate regularity conditions, provably converge to a local efficient set with probability one as the simulation budget increases. PyMOSO can interface with existing simulation software and can obtain simulation replications in parallel. Second, for implementing and testing new SO algorithms, PyMOSO includes pseudo-random number stream management, implements algorithm testing with independent pseudo-random number streams run in parallel, and computes the performance of algorithms with user-defined metrics. For convenience, we also include an implementation of R-SPLINE for problems with one objective.
The PyMOSO source code is available under a permissive open-source license.
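To illustrate the kind of problem such a package consumes (the names here are illustrative assumptions, not PyMOSO's actual API), a MOSO problem on an integer lattice can be posed as a simulation oracle that returns noisy objective values, with sampling error that shrinks as replications are averaged:

```python
import random

def oracle(x, rng):
    """One simulation replication of two objectives at integer point
    x = (x1, x2), each observed with stochastic (Gaussian) error.
    The toy objective functions are purely illustrative."""
    g1 = (x[0] - 2) ** 2 + x[1] ** 2 + rng.gauss(0, 1)
    g2 = x[0] ** 2 + (x[1] - 2) ** 2 + rng.gauss(0, 1)
    return g1, g2

def estimate(x, m, rng):
    """Average m independent replications at x; the standard error of
    each objective estimate shrinks at rate 1 / sqrt(m)."""
    reps = [oracle(x, rng) for _ in range(m)]
    return tuple(sum(r[i] for r in reps) / m for i in range(2))
```

A solver such as R-PεRLE would repeatedly query an oracle like this at increasing sample sizes `m`, which is why sampling efficiency and parallel replications matter in practice.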