The paper studies ensemble learning for graph neural networks (GNNs) in the semi-supervised setting. The key challenges are: 1) the limited inference ability of any individual GNN model, and 2) the weak performance of GNN models when trained with only a few labeled nodes.
To address these challenges, the authors propose E2GNN, an efficient ensemble learner that compresses multiple GNN models into a simple multi-layer perceptron (MLP) student model. The key innovations are:
E2GNN develops a reinforced discriminator to selectively utilize the soft labels of unlabeled nodes from different GNN teachers. This allows the student model to learn from the correctly predicted nodes and filter out those incorrectly predicted by all GNN models.
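The selection step above can be sketched in code. Note that the paper trains a reinforced (RL-based) discriminator to decide which teacher's soft label to keep per node; the minimal stand-in below replaces that learned discriminator with a simple per-node confidence heuristic, so the function name, threshold, and array layout are illustrative assumptions rather than the paper's actual method:

```python
import numpy as np

def select_teacher_soft_labels(teacher_probs, conf_threshold=0.5):
    """Per-node selection over stacked teacher soft labels.

    teacher_probs: array of shape (T, N, C) -- T GNN teachers, N nodes, C classes.
    Returns (soft_labels, keep_mask):
      soft_labels[n] is the distribution of the most confident teacher for node n;
      keep_mask[n] is False when every teacher is under-confident, i.e. the node
      is filtered out of distillation (a heuristic proxy for "predicted
      incorrectly by all GNN models").
    """
    T, N, C = teacher_probs.shape
    confidences = teacher_probs.max(axis=2)                   # (T, N) top-class prob per teacher
    best_teacher = confidences.argmax(axis=0)                 # (N,) index of chosen teacher
    soft_labels = teacher_probs[best_teacher, np.arange(N)]   # (N, C) chosen distributions
    keep_mask = confidences.max(axis=0) >= conf_threshold     # (N,) drop low-confidence nodes
    return soft_labels, keep_mask
```

The kept `(soft_labels, keep_mask)` pairs would then serve as distillation targets for the MLP student on the unlabeled nodes.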
The student MLP model enjoys MLP-level inference speed while retaining the merits of ensemble learning, such as improved accuracy and robustness.
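A minimal sketch of why the MLP student is fast at inference: its forward pass consumes only each node's own feature vector, with no neighbor aggregation or adjacency access, so nodes can be scored independently. The two-layer shape and parameter names below are hypothetical, not taken from the paper:

```python
import numpy as np

def mlp_student_forward(x, params):
    """Two-layer MLP student forward pass.

    x: node features of shape (N, d). No graph structure is used, which is
    what gives the distilled student its MLP-level inference speed.
    params: (W1, b1, W2, b2) weight matrices and biases.
    Returns class probabilities of shape (N, C).
    """
    W1, b1, W2, b2 = params
    h = np.maximum(x @ W1 + b1, 0.0)                       # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)
```

Because no adjacency matrix appears in the forward pass, the student also applies directly to unseen (inductive) nodes.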
Extensive experiments on benchmark datasets across different GNN backbones demonstrate the superiority of E2GNN. It not only outperforms state-of-the-art baselines in both transductive and inductive scenarios, but also shows good robustness against feature and topology perturbations.
Key insights from the original content by Xin Zhang, Da... at arxiv.org, 05-07-2024: https://arxiv.org/pdf/2405.03401.pdf