The paper introduces a novel graph-based performance predictor called FR-NAS that utilizes both forward and reverse representations of neural architectures to enhance prediction accuracy. The key insights are:
An analysis of the feature embeddings produced by GNN predictors that use only forward graph representations shows that, with limited training data, the encoder struggles to capture the features needed for accurate predictions.
To address this, the authors propose a predictor that employs two separate GIN encoders to process the forward and reverse graph representations of neural architectures.
To ensure the two encoders converge towards shared features, a customized training loss is introduced that minimizes the discrepancy between the embeddings from the two encoders.
Comprehensive experiments on benchmark datasets including NAS-Bench-101, NAS-Bench-201, and the DARTS search space demonstrate that FR-NAS outperforms state-of-the-art GNN-based predictors, especially with smaller training datasets, achieving 3%-16% higher Kendall's tau correlation.
Ablation studies further confirm the effectiveness of the dual graph representations and the tailored training loss in improving the predictor's performance.
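The dual-representation idea above can be illustrated with a minimal sketch. This is not the authors' code: it assumes a toy one-layer GIN-style aggregation over a DAG adjacency matrix, a mean readout, and a combined loss with a hypothetical weighting parameter `alpha`, purely to show how a forward view, a reverse (edge-flipped) view, and an embedding-discrepancy term fit together.

```python
import numpy as np

def encode(adj, feats, w):
    # Simplified GIN-style step (assumed, single layer): each node sums its
    # own features with those of its in-neighbors, then a linear map + ReLU.
    h = feats + adj.T @ feats          # aggregate from predecessor nodes
    h = np.maximum(h @ w, 0.0)         # shared MLP layer
    return h.mean(axis=0)              # graph-level embedding (mean readout)

def fr_nas_style_loss(pred_f, pred_r, target, emb_f, emb_r, alpha=0.1):
    # Regression loss on both heads plus a penalty that pulls the forward
    # and reverse embeddings toward shared features (the discrepancy term).
    mse = (pred_f - target) ** 2 + (pred_r - target) ** 2
    discrepancy = np.mean((emb_f - emb_r) ** 2)
    return mse + alpha * discrepancy

rng = np.random.default_rng(0)
adj = np.triu(rng.integers(0, 2, (4, 4)), k=1).astype(float)  # random DAG
feats = rng.normal(size=(4, 8))   # node/operation features (assumed dim 8)
w = rng.normal(size=(8, 8))       # toy encoder weights
v = rng.normal(size=8)            # toy regression head

emb_f = encode(adj, feats, w)     # forward graph representation
emb_r = encode(adj.T, feats, w)   # reverse representation (edges flipped)
loss = fr_nas_style_loss(emb_f @ v, emb_r @ v, target=0.9,
                         emb_f=emb_f, emb_r=emb_r)
```

In the actual method, the two views go through two separate GIN encoders rather than shared weights; sharing them here only keeps the sketch short.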
Key insights distilled from: Haoming Zhan... on arxiv.org, 04-25-2024. https://arxiv.org/pdf/2404.15622.pdf