The authors investigate the impact of training instance selection on automated algorithm selection (AAS) models for numerical black-box optimization problems. They use the recently proposed MA-BBOB function generator to create a large set of 11,800 functions in dimensions 2 and 5.
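As a rough illustration of the idea behind MA-BBOB-style generation, new problems can be built as weighted (affine) combinations of existing BBOB components, combined on a log scale. The sketch below is a simplified assumption of that scheme, not the authors' exact implementation; the component functions, weights, and scaling details are illustrative placeholders.

```python
import numpy as np

def affine_combination(components, weights, x, y_opts):
    """Combine several component functions into one new problem.

    components : list of callables f_i(x) -> float (stand-ins for BBOB functions)
    weights    : non-negative weights summing to 1
    y_opts     : optimal values of the components, used to shift each
                 component to 0 before combining on a log scale
    """
    log_vals = [w * np.log10(max(f(x) - y0, 1e-12) + 1)
                for f, w, y0 in zip(components, weights, y_opts)]
    # combine in log-space, then map back to the original scale
    return 10 ** sum(log_vals)

# illustrative usage with two toy components (stand-ins for BBOB functions)
sphere = lambda x: float(np.sum(np.asarray(x) ** 2))
rastrigin = lambda x: float(10 * len(x) + np.sum(np.asarray(x) ** 2
                            - 10 * np.cos(2 * np.pi * np.asarray(x))))
w = np.random.default_rng(0).dirichlet([1, 1])  # random affine weights
print(affine_combination([sphere, rastrigin], w, np.array([0.3, -1.2]), [0.0, 0.0]))
```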
The authors first analyze the complementarity of the generated functions and the original BBOB functions in terms of problem properties and algorithm performance. They find that the generated functions complement the BBOB set by filling unoccupied regions of the feature space, but the performance complementarity within their portfolio of 8 algorithms is limited.
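One common way to check whether generated instances fill gaps left by the BBOB set is to project landscape-feature vectors into a low-dimensional space and look at how far each generated instance lies from its nearest BBOB instance. The sketch below assumes synthetic placeholder feature matrices in place of the ELA features used in the paper and is only meant to show the general coverage check.

```python
import numpy as np
from sklearn.decomposition import PCA

# placeholder feature matrices: rows = problem instances, columns = landscape features
rng = np.random.default_rng(1)
bbob_features = rng.normal(loc=0.0, scale=1.0, size=(24, 10))        # e.g. the 24 BBOB functions
generated_features = rng.normal(loc=0.5, scale=1.5, size=(200, 10))  # e.g. generated instances

# project both sets into a shared 2D space fitted on their union
pca = PCA(n_components=2).fit(np.vstack([bbob_features, generated_features]))
bbob_2d = pca.transform(bbob_features)
gen_2d = pca.transform(generated_features)

# distance from each generated instance to its nearest BBOB instance:
# large values indicate regions of the feature space not covered by BBOB
dists = np.min(np.linalg.norm(gen_2d[:, None, :] - bbob_2d[None, :, :], axis=-1), axis=1)
print(f"share of generated instances far from any BBOB function: {(dists > 1.0).mean():.2f}")
```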
The authors then evaluate three training instance selection methods: random sampling, greedy diversity-based selection, and reuse of the original BBOB functions. They train XGBoost classification models on these training sets and evaluate their performance on various test sets.
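Greedy diversity-based selection is commonly realized as max-min (farthest-point) selection in feature space: repeatedly add the instance that is farthest from everything already chosen. The sketch below illustrates that general technique under the assumption that instances are represented by normalized feature vectors; it is not necessarily the authors' exact selection criterion.

```python
import numpy as np

def greedy_diverse_subset(features, k, seed_index=0):
    """Greedy max-min (farthest-point) selection of k diverse instances.

    At each step, pick the instance whose minimum distance to the
    already-selected set is largest.
    """
    selected = [seed_index]
    min_dist = np.linalg.norm(features - features[seed_index], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(min_dist))
        selected.append(nxt)
        # update each instance's distance to its closest selected instance
        min_dist = np.minimum(min_dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected

# illustrative usage on random stand-in feature vectors
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 10))          # 500 instances, 10 landscape features
train_idx = greedy_diverse_subset(X, k=50)
```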
The results show that the distribution of the training set relative to the test set is a crucial factor. Models trained on randomly sampled instances perform best on unseen test data, while models trained on greedily selected instances perform better on test sets drawn from a similarly selected distribution. Using the BBOB functions alone for training leads to poor generalization. Increasing the training set size can help mitigate the negative effects of mismatched training and test distributions.
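A hedged sketch of the evaluation protocol described above: fit an XGBoost classifier mapping landscape features to the best algorithm in the portfolio, then compare accuracy on a test set from the same distribution with accuracy on a differently distributed one. The feature matrices, labels, and splits here are synthetic placeholders, not the paper's data.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n_algorithms = 8

# synthetic stand-ins: feature vectors and "best algorithm" labels per instance
X_random = rng.normal(size=(1000, 10))               # e.g. randomly sampled generated instances
y_random = rng.integers(0, n_algorithms, size=1000)
X_shifted = rng.normal(loc=0.8, size=(300, 10))      # e.g. a test set with a different distribution
y_shifted = rng.integers(0, n_algorithms, size=300)

# train on one distribution, evaluate on both a held-out split and the shifted set
model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X_random[:700], y_random[:700])

acc_in_dist = accuracy_score(y_random[700:], model.predict(X_random[700:]))
acc_out_dist = accuracy_score(y_shifted, model.predict(X_shifted))
print(f"same-distribution accuracy: {acc_in_dist:.2f}, shifted-distribution accuracy: {acc_out_dist:.2f}")
```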
The authors conclude that the choice of training instances is an important consideration for developing robust and generalizable AAS models, especially when the distribution of practical optimization problems is unknown.
Key insights extracted from the paper by Konstantin D... on arxiv.org, 2024-04-12: https://arxiv.org/pdf/2404.07539.pdf