Core Concepts
The proposed Evolutionary Pareto Set Learning (EPSL) method efficiently learns the entire Pareto set of a multiobjective optimization problem and can incorporate various structure constraints on the solution set.
Abstract
The paper presents a novel Evolutionary Pareto Set Learning (EPSL) method for solving multiobjective optimization problems (MOPs). The key highlights are:
EPSL can learn the entire Pareto set as a parameterized model, without requiring any Pareto-optimal solutions in advance. For different preferences, it gradually minimizes the corresponding scalarized subproblem values, pushing the model towards the true Pareto set.
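To make this concrete, here is a minimal sketch of the idea, not the authors' exact algorithm: a preference-conditioned model x = h(theta, lambda) is trained with an antithetic, gradient-free evolutionary estimate of the gradient of a Tchebycheff scalarization on a toy bi-objective problem. The toy problem, the linear model, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    """Toy bi-objective problem: f1 = x^2, f2 = (x - 1)^2 for scalar x.
    Its Pareto set is the interval [0, 1]."""
    return np.array([x ** 2, (x - 1.0) ** 2])

def tchebycheff(f, lam, z=np.zeros(2)):
    """Weighted Tchebycheff scalarization with ideal point z."""
    return np.max(lam * (f - z))

def model(theta, lam):
    """Illustrative preference-to-solution map: x = theta[0] + theta[1] * lam[0]."""
    return theta[0] + theta[1] * lam[0]

theta = np.zeros(2)
sigma, lr, n_pert = 0.1, 0.05, 8  # assumed hyperparameters

for step in range(2000):
    # Sample a random preference vector on the 2-simplex.
    l0 = rng.uniform(0.05, 0.95)
    lam = np.array([l0, 1.0 - l0])
    # Antithetic evolutionary estimate of the scalarized-loss gradient.
    grad = np.zeros_like(theta)
    for _ in range(n_pert):
        eps = rng.normal(size=theta.shape)
        g_plus = tchebycheff(objectives(model(theta + sigma * eps, lam)), lam)
        g_minus = tchebycheff(objectives(model(theta - sigma * eps, lam)), lam)
        grad += (g_plus - g_minus) / (2.0 * sigma) * eps
    theta -= lr * grad / n_pert

# After training, the model maps any preference to an approximately
# Pareto-optimal solution of the toy problem.
x_mid = model(theta, np.array([0.5, 0.5]))
```

Each training step improves the model for one sampled preference; averaged over many preferences, the whole preference-conditioned map is pushed towards the Pareto set at once.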
EPSL can incorporate various structure constraints on the solution set, such as shared components, learnable variable relationships, and predefined shapes. This allows decision-makers to flexibly trade off Pareto optimality with their preferred solution structures.
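One way such a constraint can be realized, shown here as an illustrative assumption rather than the paper's exact parameterization, is to bake the structure directly into the model: a shared component is a preference-independent parameter, so every trade-off solution is guaranteed to contain it by construction.

```python
import numpy as np

def structured_model(theta, lam):
    """Hypothetical 3-variable solution model with a structure constraint:
    x1 is shared by all trade-off solutions (independent of the preference),
    while x2 and x3 vary with the preference vector lam."""
    shared = theta["shared"]                 # x1: one value for every preference
    varying = theta["w"] @ lam + theta["b"]  # x2, x3: affine in the preference
    return np.concatenate(([shared], varying))

# Example parameters (illustrative values, not learned ones).
theta = {"shared": 0.7,
         "w": np.array([[1.0, 0.0], [0.0, 1.0]]),
         "b": np.zeros(2)}

xa = structured_model(theta, np.array([0.2, 0.8]))
xb = structured_model(theta, np.array([0.9, 0.1]))
# x1 is identical across preferences by construction; x2, x3 trade off.
```

Because the constraint lives in the parameterization itself, the same training loop can optimize Pareto quality while the preferred structure holds exactly for every sampled preference.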
The authors conduct extensive experiments on 16 real-world multiobjective engineering design problems. The results show that EPSL outperforms several state-of-the-art multiobjective evolutionary algorithms in terms of hypervolume, while providing the entire Pareto set in a compact model form.
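For intuition on the headline metric: for a bi-objective minimization problem, the hypervolume is the area dominated by a nondominated front and bounded by a reference point, computable by sorting and summing rectangles. This is the standard construction, not code from the paper.

```python
def hypervolume_2d(points, ref):
    """Hypervolume (larger is better) of a 2-D minimization front with
    respect to a reference point. Assumes the points are mutually
    nondominated (so sorting by f1 ascending gives f2 descending)."""
    pts = sorted(points, key=lambda p: p[0])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        # Each point contributes the rectangle between it, the previous
        # point's f2 level, and the reference point.
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Three nondominated points with reference point (1, 1):
hv = hypervolume_2d([(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)], (1.0, 1.0))
# 0.8*0.2 + 0.5*0.3 + 0.2*0.3 = 0.37
```

A better front (closer to the ideal point and more spread out) dominates a larger area, hence a higher hypervolume.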
The proposed stochastic evolutionary gradient descent algorithm for EPSL is computationally efficient, with a runtime comparable to a single run of MOEA/D. Sampling solutions from the learned Pareto set model is also trivial.
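A sketch of why sampling is cheap once training is done: generating any number of trade-off solutions needs only one forward pass of the model per sampled preference, with no further optimization runs. The stand-in model and its coefficients below are purely illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def trained_model(lam):
    """Stand-in for a trained preference-to-solution map; the affine
    coefficients are illustrative assumptions."""
    return 0.81 - 0.62 * lam[0]

# Draw preferences uniformly from the simplex, then map each one to a
# solution -- one cheap forward pass each, no re-optimization.
lams = rng.dirichlet(np.ones(2), size=10_000)
solutions = np.array([trained_model(l) for l in lams])
```

In contrast, a population-based solver would have to be rerun (or its archive resampled) to produce solutions for new preferences.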
Overall, EPSL provides a powerful framework for multiobjective optimization that can effectively handle both Pareto optimality and user-specified structure constraints on the solution set.
Stats
The paper does not provide any explicit numerical data or statistics. The key results are presented through visualizations of the Pareto sets and fronts.