Core Concepts
The survey analyzes graph self-supervised learning tasks for pre-training graph foundation models from a knowledge-based perspective.
Abstract
Abstract introduces the importance of self-supervised learning (SSL) in pre-training graph foundation models.
Introduction highlights the evolution of graph mining techniques and the importance of SSL for task generalization.
Section 2 defines basic concepts related to graphs and graph foundation models.
Section 3 covers microscopic pretexts targeting node features, properties, links, and context, via tasks such as feature prediction, denoising, instance discrimination, and dimension discrimination (a minimal sketch of one such pretext follows this list).
Section 4 delves into macroscopic pretexts including long-range similarities, motifs, clusters, global structure, and manifolds.
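To make the microscopic pretexts above concrete, here is a minimal sketch of feature prediction in plain PyTorch. The one-layer GCN-style encoder (`TinyGNN`), the random toy graph, and the 15% masking rate are illustrative assumptions, not details taken from the survey: a subset of node features is masked, the corrupted graph is encoded, and the model is trained to reconstruct the original features of the masked nodes.

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """Illustrative one-layer GCN-style encoder: relu(A_hat @ X @ W)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        adj = adj + torch.eye(adj.size(0))   # add self-loops
        d = adj.sum(dim=1).pow(-0.5)         # symmetric degree normalization
        adj_hat = d[:, None] * adj * d[None, :]
        return torch.relu(adj_hat @ self.lin(x))

# Toy graph: 100 nodes, 16-dim features, random undirected edges.
num_nodes, feat_dim, hid_dim = 100, 16, 32
x = torch.randn(num_nodes, feat_dim)
adj = (torch.rand(num_nodes, num_nodes) < 0.05).float()
adj = ((adj + adj.T) > 0).float()

encoder = TinyGNN(feat_dim, hid_dim)
decoder = nn.Linear(hid_dim, feat_dim)       # reconstructs raw features
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

for step in range(100):
    # Corrupt the input: zero out the features of ~15% of nodes.
    mask = torch.rand(num_nodes) < 0.15
    x_corrupt = x.clone()
    x_corrupt[mask] = 0.0

    # Pretext objective: reconstruct the original features of masked nodes.
    z = encoder(x_corrupt, adj)
    loss = ((decoder(z)[mask] - x[mask]) ** 2).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Denoising variants replace the zero-masking with additive noise on the features; the reconstruction objective is otherwise the same.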
Stats
Graph self-supervised learning has become the standard approach for pre-training graph foundation models.
The survey covers a total of 9 knowledge categories and 25 pre-training tasks.
Quotes
"Graph self-supervised learning aims to solve the task generalization problem."
"Instance discrimination encourages abandoning shallow patterns for deeper semantic agreement."