The content discusses the problem of negative transfer in graph neural networks (GNNs). Unlike in image or text datasets, negative transfer commonly occurs in graph-structured data, even when the source and target graphs are semantically similar. This is attributed to the sensitivity of GNNs to graph structure: differences in structural distribution between the source and target graphs can lead to distinct marginal distributions of node embeddings.
To address this challenge, the authors introduce two methods: Subgraph Pooling (SP) and Subgraph Pooling++ (SP++). The key insight is that for semantically similar graphs, although structural differences lead to significant distribution shift in node embeddings, their impact on subgraph embeddings could be marginal. By transferring subgraph-level knowledge across graphs, SP and SP++ can effectively mitigate the negative transfer issue.
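The core mechanism can be illustrated with a minimal sketch: pool each node's embedding together with those of its k-hop neighbors, so that downstream layers operate on subgraph-level rather than node-level representations. This is not the authors' implementation; the function names, the use of NumPy, and the choice of mean pooling over a dense adjacency matrix are illustrative assumptions.

```python
import numpy as np

def k_hop_reachability(adj, k):
    # Boolean matrix: reach[i, j] is True if j is within k hops of i
    # (each node is always within 0 hops of itself).
    n = adj.shape[0]
    reach = np.eye(n, dtype=bool)
    frontier = np.eye(n, dtype=bool)
    for _ in range(k):
        frontier = (frontier @ adj) > 0
        reach |= frontier
    return reach

def subgraph_pooling(H, adj, k=2):
    # Mean-pool each node's k-hop neighborhood embeddings
    # (hypothetical helper, sketching the subgraph-level transfer idea).
    reach = k_hop_reachability(adj, k).astype(float)
    return (reach @ H) / reach.sum(axis=1, keepdims=True)

# Toy example: a 4-node path graph 0-1-2-3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
H = np.random.randn(4, 8)   # stand-in for GNN node embeddings
Z = subgraph_pooling(H, adj, k=1)  # subgraph embeddings, shape (4, 8)
```

With k=1 on the path graph, node 0's subgraph embedding is the mean of its own and node 1's embeddings; smoothing over neighborhoods in this way is what makes the pooled representations less sensitive to structural distribution shift than the raw node embeddings.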
The authors provide a comprehensive theoretical analysis explaining how Subgraph Pooling reduces the discrepancy between the source and target graphs. They also conduct extensive experiments on various datasets, including Citation networks, Airport networks, Twitch networks, and the Elliptic network, to demonstrate the superiority of their methods under different transfer learning settings.