The content delves into the importance of performance in configurable software systems and how deep learning can enhance performance prediction. It discusses various preprocessing methods, encoding schemes, and sampling strategies used in deep configuration performance learning. The study emphasizes the need for accurate data preparation to improve the quality and reliability of deep learning models.
The authors conducted a systematic review covering 948 papers to analyze 85 primary studies on deep configuration performance learning. They identified key topics such as data preparation, model building, evaluation procedures, and model exploitation. The study provides insights into good practices, potential issues, and future research directions in this area.
Key findings include the prevalence of default datasets used without preprocessing, normalization as a popular method for scaling configuration data, label encoding as a common scheme for converting configuration option values into numeric form, and random sampling as the dominant strategy for selecting configurations. The content highlights the importance of proper data preprocessing techniques in enhancing the accuracy and effectiveness of deep learning models for predicting software performance.
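The three preprocessing steps named above can be sketched in a minimal example. This is not code from the surveyed studies; the dataset, option names, and helper functions are hypothetical, and min-max scaling is used as one common form of normalization.

```python
import random

# Hypothetical configuration dataset: each row maps configuration options
# to values, plus a measured performance label (e.g., latency in ms).
configs = [
    {"cache": "on",  "threads": 2, "perf": 120.0},
    {"cache": "off", "threads": 4, "perf": 95.0},
    {"cache": "on",  "threads": 8, "perf": 60.0},
    {"cache": "off", "threads": 2, "perf": 140.0},
]

def label_encode(rows, option):
    """Label encoding: map each categorical option value to an integer."""
    mapping = {v: i for i, v in enumerate(sorted({r[option] for r in rows}))}
    for r in rows:
        r[option] = mapping[r[option]]
    return mapping

def normalize(rows, option):
    """Min-max normalization: scale a numeric option into [0, 1]."""
    vals = [r[option] for r in rows]
    lo, hi = min(vals), max(vals)
    for r in rows:
        r[option] = (r[option] - lo) / (hi - lo) if hi > lo else 0.0

label_encode(configs, "cache")   # "off" -> 0, "on" -> 1
normalize(configs, "threads")    # 2, 4, 8 -> 0.0, ~0.33, 1.0

# Random sampling: select a subset of configurations for training.
random.seed(0)
train = random.sample(configs, k=3)
```

The encoded and normalized rows can then be fed to a deep learning model as numeric feature vectors, with `perf` as the prediction target.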