Core Concepts
The Evolutionary Preference Sampling (EPS) strategy efficiently samples preference vectors to accelerate the convergence of Pareto set learning models.
Abstract
The paper proposes an Evolutionary Preference Sampling (EPS) strategy to enhance the efficiency of Pareto set learning. The key insights are:
Preference vector sampling has a significant impact on the convergence speed of Pareto set learning models; uniform sampling can be ineffective when the Pareto front has a complex shape (e.g., disconnected or degenerate regions).
EPS treats preference sampling as an evolutionary process. It starts with uniform sampling to collect preference vectors and their corresponding objective values, then selects a subset of high-performing preference vectors as the initial population. This population undergoes crossover and mutation to generate the preference vectors for the next training period (a code sketch of this loop appears after these insights).
This evolutionary generation of preference vectors is repeated, with the population updated in each period, so the preference vectors continuously evolve and concentrate on crucial regions of the Pareto front.
Experiments on benchmark and real-world problems show that EPS accelerates the convergence of five state-of-the-art Pareto set learning algorithms relative to their original uniform sampling, especially on problems with disconnected or degenerate Pareto fronts.
Sensitivity analysis reveals that the subset selection percentage and the crossover/mutation probabilities have a significant impact on EPS's performance. Selecting a small subset (5-10%) of high-performing preference vectors and using high crossover (0.9) and mutation (0.7) probabilities tend to work well across the tested problems.
A case study on the DTLZ7 problem (which has a disconnected Pareto front) further demonstrates the advantages of EPS, which achieves a more balanced distribution of solutions across the Pareto front than uniform sampling.
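To make the sampling loop from the insights above concrete, here is a minimal, self-contained sketch of one plausible implementation, not the authors' code: it assumes preference vectors lie on the probability simplex (sampled uniformly via a flat Dirichlet), uses blend crossover and Gaussian mutation as stand-ins for whatever operators the paper employs, and replaces the model-based fitness with a toy quadratic score. The names uniform_preferences, select_elite, evolve, and toy_score are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_preferences(n, m):
    # Uniform sampling on the (m-1)-simplex via a flat Dirichlet distribution.
    return rng.dirichlet(np.ones(m), size=n)

def select_elite(prefs, scores, frac=0.1):
    # Keep the top `frac` of preference vectors (lower score = better here);
    # the paper found a small subset (5-10%) to work well.
    k = max(2, int(len(prefs) * frac))
    return prefs[np.argsort(scores)[:k]]

def evolve(population, n_offspring, p_cross=0.9, p_mut=0.7, sigma=0.1):
    # Blend crossover plus Gaussian mutation, then projection back onto the simplex.
    children = []
    for _ in range(n_offspring):
        a, b = population[rng.choice(len(population), size=2, replace=False)]
        child = a.copy()
        if rng.random() < p_cross:
            w = rng.random()
            child = w * a + (1.0 - w) * b         # convex combination stays on the simplex
        if rng.random() < p_mut:
            child = child + rng.normal(0.0, sigma, size=child.shape)
        child = np.clip(child, 1e-6, None)
        children.append(child / child.sum())      # renormalize onto the simplex
    return np.array(children)

def toy_score(pref):
    # Placeholder fitness. In EPS the score would come from the Pareto set
    # model's objective values for this preference (e.g., a hypervolume-based
    # criterion); a fixed quadratic is used here only so the demo runs.
    return float(np.sum((pref - 1.0 / len(pref)) ** 2))

n_prefs, n_obj, n_periods = 100, 3, 5
prefs = uniform_preferences(n_prefs, n_obj)        # warm-up period: uniform sampling
for period in range(n_periods):
    scores = np.array([toy_score(p) for p in prefs])
    elite = select_elite(prefs, scores, frac=0.1)  # subset selection
    prefs = evolve(elite, n_offspring=n_prefs)     # preferences for the next period
```

The convex-combination crossover is chosen deliberately: a convex combination of two simplex points remains on the simplex, so only the mutation step needs an explicit repair (clip and renormalize).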
Stats
The main text reports results as the mean and standard deviation of the log hypervolume difference, a common performance metric in multi-objective optimization; no other standalone numerical data are quoted here.
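For reference, the log hypervolume difference is typically computed as the log of the gap between the hypervolume of the true Pareto front and that of the approximation. A minimal sketch using pymoo's hypervolume indicator (log_hv_difference and the toy front are illustrative, not taken from the paper):

```python
import numpy as np
from pymoo.indicators.hv import HV  # assumes pymoo is installed

def log_hv_difference(approx_front, true_front, ref_point):
    # log10 of the hypervolume gap to the true front; more negative is better.
    hv = HV(ref_point=ref_point)
    gap = hv(true_front) - hv(approx_front)
    return np.log10(max(gap, 1e-12))  # guard against a zero or negative gap

# Toy usage: a 2-objective linear front f2 = 1 - f1 and a sparse approximation.
f1 = np.linspace(0.0, 1.0, 200)
true_front = np.stack([f1, 1.0 - f1], axis=1)
approx_front = true_front[::20]
print(log_hv_difference(approx_front, true_front, ref_point=np.array([1.1, 1.1])))
```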
Quotes
The paper does not contain any direct quotes that are particularly striking or support the key arguments.