Abstract
We consider the problem of setting the hyperparameters of one or more surrogate-assisted evolutionary optimization algorithms, and of similar methods such as Bayesian Optimization (i.e. Gaussian Process regression combined with an acquisition function for choosing the next solutions to sample), which are often used for problems with expensive function evaluations. It has been remarked elsewhere that these algorithms can be started with an initial experimental design, with a random sample, or from as few as two points. Here we investigate how to make such choices. By equating the use of random search for the initial population (or design) with running a single-point random searcher for a few initial samples (and extending this view to other initialization methods), it becomes clear that these methods (initialization + Bayesian optimizer) resemble an algorithm portfolio, and something can be learned from that literature. However, we start largely afresh with experiments that combine different sampling methods (random search, Latin hypercube) with Gaussian processes and different acquisition functions (expected improvement, and a generalization of it). We consider a number of different experimental setups (functions and dimensions), and attempt to build a rough picture of what works well where. Our work complements previous Evolutionary Computation work on initialization using subrandom sequences and experimental designs, but considers more modern algorithms for expensive problems.
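To make the pipeline concrete, the following is a minimal Python sketch (not the paper's code) of the kind of method studied: a Latin hypercube initial design feeding a Gaussian process surrogate, with new points chosen by maximising an expected improvement acquisition function. The test objective, budget sizes, and candidate-set maximiser are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def f(x):  # hypothetical expensive objective (sphere function)
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(0)
dim, n_init, n_iter = 2, 5, 20
lo, hi = -5.0, 5.0

# Initial design: Latin hypercube sample scaled to the search box.
X = qmc.scale(qmc.LatinHypercube(d=dim, seed=0).random(n_init),
              [lo] * dim, [hi] * dim)
y = f(X)

def expected_improvement(Xc, gp, y_best, xi=0.01):
    """EI for minimisation; xi sets the exploration margin."""
    mu, sigma = gp.predict(Xc, return_std=True)
    sigma = np.maximum(sigma, 1e-12)  # guard against zero variance
    z = (y_best - mu - xi) / sigma
    return (y_best - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

for _ in range(n_iter):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True).fit(X, y)
    # Maximise EI over a random candidate set (a simple stand-in
    # for a proper inner optimiser over the acquisition surface).
    cand = rng.uniform(lo, hi, size=(2048, dim))
    x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next))

print("best value found:", y.min())
```

Swapping the initializer (e.g. plain random sampling for the Latin hypercube) or the acquisition function changes only the corresponding lines, which is what makes combinations of the two natural objects of study.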
| Original language | English |
| --- | --- |
| Title of host publication | Data Science meets Optimization Workshop: CEC2017 & CPAIOR 2017 |
| Subtitle of host publication | DSO 2017 |
| Number of pages | 6 |
| Publication status | Accepted/In press - 1 May 2017 |