On Benchmarking Interactive Evolutionary Multi-Objective Algorithms

Seyed Mahdi Shavarani, Manuel López-Ibáñez, Joshua Knowles

Research output: Contribution to journal › Article › peer-review


Abstract

We carry out a detailed performance assessment of two interactive evolutionary multi-objective algorithms (EMOAs) using a machine decision maker that enables us to repeat experiments and study specific behaviours modelled after human decision makers (DMs). Using the same set of benchmark test problems as in the original papers on these interactive EMOAs (in up to 10 objectives), we bring to light interesting effects when we use a machine DM based on sigmoidal utility functions that have support from the psychology literature (replacing the simpler utility functions used in the original papers). Our machine DM also enables us to go further and simulate human biases and inconsistencies. Our results from this study, which is the most comprehensive assessment of multiple interactive EMOAs conducted so far, suggest that current well-known algorithms have shortcomings that need addressing. These results further demonstrate the value of improving the benchmarking of interactive EMOAs.
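To illustrate the idea of a machine DM driven by sigmoidal utilities, the following is a minimal sketch in Python, assuming a logistic value curve per (minimised) objective and weighted-sum aggregation. The function names, the choice of a logistic curve, and all parameters here are illustrative assumptions; the paper's actual machine DM, including its bias and inconsistency models, is more elaborate.

```python
import numpy as np

def sigmoidal_value(f, ideal, nadir, steepness=10.0, inflection=0.5):
    """Map a (minimised) objective value to a [0, 1] desirability score.

    The objective is first normalised to [0, 1] using the ideal and nadir
    points, then passed through a logistic curve, so marginal value changes
    are largest near the inflection point. This is only a stand-in for the
    sigmoidal utility shapes discussed in the psychology literature.
    """
    z = (f - ideal) / (nadir - ideal)            # 0 = best, 1 = worst
    return 1.0 / (1.0 + np.exp(steepness * (z - inflection)))

def machine_dm_utility(objs, ideal, nadir, weights):
    """Aggregate per-objective sigmoidal values into one scalar utility,
    which a machine DM could use to compare or rank candidate solutions."""
    values = sigmoidal_value(np.asarray(objs, dtype=float),
                             np.asarray(ideal, dtype=float),
                             np.asarray(nadir, dtype=float))
    return float(np.dot(weights, values))

# Example: score one candidate of a 3-objective minimisation problem.
u = machine_dm_utility(objs=[0.2, 0.5, 0.9],
                       ideal=[0.0, 0.0, 0.0],
                       nadir=[1.0, 1.0, 1.0],
                       weights=[0.5, 0.3, 0.2])
print(u)
```

In such a setup, preference queries issued by an interactive EMOA (e.g. "pick the better of these two solutions") would be answered by comparing these utility values, optionally perturbed to mimic human biases and inconsistencies.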
Original language: English
Journal: IEEE Transactions on Evolutionary Computation
DOIs
Publication status: Published - 27 Jun 2023

Keywords

  • Design of Experiments
  • Interactive Evolutionary Multi-Objective Optimization
  • Machine Decision Maker
  • Performance assessment

