LAMP, a novel graph contrastive learning framework, generates contrastive views through model pruning rather than data augmentation, improving performance and avoiding the limitations of augmentation-based methods.
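LAMP's actual pruning schedule and encoder are not reproduced here; the following minimal sketch only illustrates the general idea of producing two contrastive views of the same input by encoding it with a full model and with a weight-pruned copy (the two-layer encoder, the L1 pruning criterion, and the 30% pruning ratio are illustrative assumptions, not LAMP's architecture):

```python
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class SimpleGraphEncoder(nn.Module):
    """Two-layer stand-in for any GNN encoder: each layer mixes node
    features through the (normalized) adjacency matrix."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)

    def forward(self, x, adj):
        h = torch.relu(adj @ self.lin1(x))
        return adj @ self.lin2(h)

def pruned_copy(encoder, amount=0.3):
    """Deep-copy the encoder and remove the smallest `amount` of its
    weights (L1 unstructured pruning) -- a perturbed view-generator
    obtained without touching the input data."""
    clone = copy.deepcopy(encoder)
    for module in clone.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return clone

# The same (x, adj) yields two views z1, z2 for a contrastive loss.
x = torch.randn(8, 16)             # 8 nodes, 16 features
adj = torch.eye(8)                 # placeholder adjacency
encoder = SimpleGraphEncoder(16, 32)
z1 = encoder(x, adj)               # view from the full model
z2 = pruned_copy(encoder)(x, adj)  # view from the pruned model
```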
LAC, a novel graph contrastive learning framework, improves unsupervised node representation learning by introducing a learnable augmentation method that operates in an orthogonal continuous space and by employing InfoBal, an information-theoretic principle for constructing effective pretext tasks.
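Neither LAC's orthogonal-space construction nor the InfoBal objective is reproduced below; as a rough sketch of what "learnable augmentation in a continuous space" means in contrast to discrete edge dropping or feature masking, an augmenter can be a small network whose noise-conditioned output perturbs node features and whose parameters receive gradients from the contrastive loss (the architecture and the `eps` scale are assumptions for illustration):

```python
import torch
import torch.nn as nn

class LearnableAugmenter(nn.Module):
    """Produces a continuous, input-dependent perturbation of node
    features; its parameters are trained jointly with the encoder,
    so the augmentation is optimized rather than hand-picked."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, x, eps=0.1):
        # Conditioning on fresh Gaussian noise makes each call yield
        # a different (stochastic) view of the same features.
        noise = torch.randn_like(x)
        return x + eps * self.net(torch.cat([x, noise], dim=-1))

x = torch.randn(8, 16)
aug = LearnableAugmenter(16)
view1, view2 = aug(x), aug(x)   # two distinct learnable views
```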
TensorMV-GCL, a novel framework integrating tensor learning, graph contrastive learning, and extended persistent homology, outperforms existing methods on graph classification by effectively capturing multi-scale structural and topological information.
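Computing extended persistent homology requires a TDA library and is outside the scope of a short sketch; assuming `topo_feats` is a precomputed vector of persistence statistics per graph, one simple tensor-style fusion of topological and learned structural channels is an outer product followed by a projection (an illustrative stand-in, not TensorMV-GCL's actual module):

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Fuses a learned graph embedding with precomputed topological
    features via an outer product, capturing multiplicative
    interactions between the two channels."""
    def __init__(self, emb_dim, topo_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(emb_dim * topo_dim, out_dim)

    def forward(self, graph_emb, topo_feats):
        fused = torch.einsum("bi,bj->bij", graph_emb, topo_feats)
        return self.proj(fused.flatten(start_dim=1))

g = torch.randn(4, 32)   # batch of 4 learned graph embeddings
t = torch.randn(4, 8)    # assumed precomputed persistence features
rep = FusionHead(32, 8, 64)(g, t)
```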
SpeGCL, a novel spectral graph contrastive learning framework, improves self-supervised graph representation learning by exploiting high-frequency spectral information that traditional methods overlook and by building its contrastive objective around negative sample pairs.
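SpeGCL's full negative-pair contrastive objective is not shown here; the sketch below only extracts the high-frequency component the summary refers to, using the symmetric normalized Laplacian as a high-pass graph filter (the random toy graph is an assumption for illustration):

```python
import torch

def normalized_laplacian(adj):
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(dim=1)
    d_inv_sqrt = torch.where(deg > 0, deg.pow(-0.5), torch.zeros_like(deg))
    a_norm = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return torch.eye(adj.size(0)) - a_norm

# Build a random undirected toy graph and split node features into
# low-frequency (smooth) and high-frequency (detail) components; the
# high-frequency part L @ x is what many GCL methods discard.
adj = (torch.rand(8, 8) > 0.7).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)
x = torch.randn(8, 16)
L = normalized_laplacian(adj)
x_low = (torch.eye(8) - L) @ x   # low-pass view
x_high = L @ x                   # high-pass view (SpeGCL's focus)
```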
Graph Contrastive Invariant Learning improves graph representation learning from a causal perspective, encouraging representations to capture invariant causal factors rather than spurious correlations.
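The paper's causal machinery is not reproduced here; one generic way to encode "prefer causal, view-stable factors" — an illustrative invariance-style regularizer, not the paper's actual objective — is to penalize the variance of each node's representation across contrastive views:

```python
import torch

def invariance_penalty(z_views):
    """z_views: (num_views, num_nodes, dim). Features that stay stable
    across views act as proxies for causal factors; penalizing the
    cross-view variance discourages reliance on spurious,
    view-specific signals."""
    return z_views.var(dim=0, unbiased=False).mean()

z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
loss_inv = invariance_penalty(torch.stack([z1, z2]))
```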