## RETROSPECTIVE APPROXIMATION ALGORITHMS FOR MULTI-OBJECTIVE SIMULATION OPTIMIZATION ON INTEGER LATTICES

Thesis, posted on 10.06.2019 by Kyle Cooper

We consider multi-objective simulation optimization (MOSO) problems, that is, nonlinear optimization problems in which multiple simultaneous objective functions can only be observed with stochastic error, e.g., as output from a Monte Carlo simulation model. In this context, the solution to a MOSO problem is the efficient set, which is the set of all feasible decision points for which no other feasible decision point is at least as good on all objectives and strictly better on at least one objective. We are concerned primarily with MOSO problems on integer lattices, that is, MOSO problems where the feasible set is a subset of an integer lattice.
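As a purely illustrative rendering of this definition, the following sketch computes the efficient set of a toy bi-objective problem on integer points by exhaustive pairwise dominance checks; the function names and the toy objectives are our own, not part of the thesis.

```python
def dominates(g1, g2):
    """g1 dominates g2 if it is at least as good on every objective and
    strictly better on at least one (assuming minimization)."""
    return all(a <= b for a, b in zip(g1, g2)) and any(a < b for a, b in zip(g1, g2))

def efficient_set(feasible, g):
    """Return the feasible points that no other feasible point dominates."""
    return [x for x in feasible if not any(dominates(g(y), g(x)) for y in feasible if y != x)]

# toy bi-objective problem on a 1-D integer lattice: minimize (x, (x - 2)^2)
g = lambda x: (x, (x - 2) ** 2)
print(efficient_set(range(4), g))  # → [0, 1, 2]
```

Here x = 3 is excluded because x = 1 attains objective values (1, 1) versus (3, 1): at least as good on both objectives and strictly better on the first. Real MOSO problems differ in that g can only be estimated from noisy simulation replications, which is what the algorithms below address.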

In the first study, we propose the Retrospective Partitioned Epsilon-constraint with Relaxed Local Enumeration (R-PεRLE) algorithm to solve the bi-objective simulation optimization problem on integer lattices. R-PεRLE is designed for sampling efficiency. It uses a retrospective approximation (RA) framework to repeatedly call the PεRLE sample-path solver at a sequence of increasing sample sizes, using the solution from the previous RA iteration as a warm start for the current RA iteration. The PεRLE sample-path solver is designed to solve the sample-path problem only to within a tolerance commensurate with the sampling error. It comprises a call to each of the Pε and RLE algorithms, in sequence. First, Pε searches for new points to add to the sample-path local efficient set by solving multiple constrained single-objective optimization problems; it places constraints to locate new sample-path local efficient points that are a function of the standard error away, in the objective space, from those already obtained. Then, the set of sample-path local efficient points found by Pε is sent to RLE, a local crawling algorithm that ensures the set is a sample-path approximate local efficient set. As the number of RA iterations increases, R-PεRLE provably converges to a local efficient set with probability one under appropriate regularity conditions.

We also propose a naive, provably convergent benchmark algorithm for problems with two or more objectives, called R-MinRLE. R-MinRLE is identical to R-PεRLE except that it replaces the Pε algorithm with an algorithm that updates one local minimum on each objective before invoking RLE. R-PεRLE performs favorably relative to R-MinRLE and the current state of the art, MO-COMPASS, in our numerical experiments. Our work points to a family of RA algorithms for MOSO on integer lattices that employ RLE to certify a sample-path approximate local efficient set, and for which the convergence guarantees are provided in this study.
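The RA framework itself can be sketched in a few lines. Everything below is our own illustrative choice, not the thesis's implementation: the sample-size schedule, the toy single-objective sample-path solver, and all names are placeholders (PεRLE, in particular, is far more sophisticated than the integer line search used here).

```python
import random
import statistics

def ra_solve(sample_path_solver, x0, iterations=5, m0=8, growth=1.5, seed=0):
    """Illustrative retrospective-approximation loop: solve a sequence of
    sample-path problems at increasing sample sizes, warm-starting each
    iteration at the previous iteration's solution."""
    rng = random.Random(seed)
    x, m = x0, m0
    for _ in range(iterations):
        # a fresh batch of replication seeds defines this iteration's sample-path problem
        seeds = [rng.randrange(2**31) for _ in range(m)]
        x = sample_path_solver(x, seeds)   # warm start from the previous solution
        m = int(m * growth)                # grow the sample size
    return x

def toy_solver(x, seeds):
    """Toy sample-path solver: minimize the sample average of a noisy
    objective, here (x - 3)^2 plus Gaussian noise, by integer line search."""
    def fbar(y):
        return statistics.fmean((y - 3) ** 2 + random.Random(s ^ y).gauss(0, 1) for s in seeds)
    while True:
        best = min((x - 1, x, x + 1), key=fbar)
        if best == x:
            return x
        x = best
```

With this schedule, early iterations are cheap and inexact while later iterations refine the solution at larger sample sizes; `ra_solve(toy_solver, 10)` walks from the warm start toward the true minimizer at 3. Fixing each iteration's seeds is what makes the inner problem a deterministic "sample-path" problem that can be solved to a tolerance matched to its sampling error.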

In the second study, we present the PyMOSO software package for solving multi-objective simulation optimization problems on integer lattices, and for implementing and testing new simulation optimization (SO) algorithms. First, for solving MOSO problems on integer lattices, PyMOSO implements R-PεRLE and R-MinRLE, which are developed in the first study. Both algorithms employ pseudo-gradients, are designed for sampling efficiency, and return solutions that, under appropriate regularity conditions, provably converge to a local efficient set with probability one as the simulation budget increases. PyMOSO can interface with existing simulation software and can obtain simulation replications in parallel. Second, for implementing and testing new SO algorithms, PyMOSO includes pseudo-random number stream management, implements algorithm testing with independent pseudo-random number streams run in parallel, and computes the performance of algorithms with user-defined metrics.
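The testing idea in this paragraph, that each replication of a randomized algorithm runs on its own reproducible source of randomness and a user-defined metric summarizes the results, can be illustrated schematically. This is not PyMOSO's API: PyMOSO's stream management is more rigorous than the per-replication seeding shown here, and every name below is our own.

```python
import random

def run_replications(algorithm, n_reps, base_seed=12345):
    """Run independent replications of a randomized algorithm, giving each
    its own seeded generator so results are reproducible and the
    replications do not share randomness (illustrative only)."""
    results = []
    for rep in range(n_reps):
        rng = random.Random(base_seed + rep)  # distinct, reproducible seed per replication
        results.append(algorithm(rng))
    return results

def mean_metric(values):
    """Example of a user-defined performance metric over replications."""
    return sum(values) / len(values)

# e.g., summarize a noisy 'algorithm output' over 1000 replications
outputs = run_replications(lambda rng: rng.random(), n_reps=1000)
```

Rerunning with the same `base_seed` reproduces the experiment exactly, and because each replication owns its generator, the replications can in principle be dispatched in parallel; dedicated stream generators (rather than consecutive integer seeds) are what make the streams provably non-overlapping.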

For convenience, we also include an implementation of R-SPLINE for problems with one objective. The PyMOSO source code is available under a permissive open source license.