Key Concepts
The authors argue that traditional graph contrastive learning may fail to capture invariant representations because non-causal information in graphs leaks into the learned views. They propose GCIL, a novel causality-inspired method, to improve the model's ability to learn invariant representations.
Abstract
The paper examines the limitations of traditional graph contrastive learning methods in capturing invariant representations, which stem from non-causal information in graphs. The proposed GCIL method applies interventions on non-causal factors and incorporates invariance and independence objectives to strengthen causal information extraction. Experimental results demonstrate superior performance compared to existing methods across multiple datasets.
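The intervention on non-causal factors can be illustrated with a simple perturbation. The sketch below is a hypothetical example, not GCIL's actual intervention: it randomly masks feature dimensions, a common way to perturb a graph view in contrastive learning while (ideally) leaving causal structure intact. The function name and masking strategy are assumptions for illustration.

```python
import random

def perturb_features(x, mask_rate=0.3, seed=0):
    """Hypothetical non-causal intervention: zero out a random subset of
    feature dimensions across all nodes, producing an augmented view.

    x: node feature matrix as a list of rows (one row per node).
    """
    rng = random.Random(seed)
    d = len(x[0])
    # Choose which feature dimensions to mask (the "intervened" factors).
    masked = {j for j in range(d) if rng.random() < mask_rate}
    # Apply the same mask to every node so the perturbation is consistent.
    return [[0.0 if j in masked else v for j, v in enumerate(row)]
            for row in x]

# Example: three nodes with 10 features each, all set to 1.0.
views = perturb_features([[1.0] * 10 for _ in range(3)])
```

In practice such perturbed views are fed to a shared encoder, and the training objectives then operate on the resulting representations.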
Key Points:
Traditional graph contrastive learning may fail to capture invariant representations due to non-causal information.
The proposed GCIL method leverages causal interventions and objectives to improve learning of invariant representations.
Experimental results show GCIL outperforms existing methods on node classification tasks.
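The two objectives named above can be sketched in simplified form. This is a minimal illustration, assuming a cosine-similarity invariance term (pull the two views of each node together) and a covariance-penalty independence term (decorrelate representation dimensions); the paper's exact loss formulations may differ.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def invariance_loss(z1, z2):
    """Encourage each node's representations from the two views to agree
    (minimizing this maximizes average cosine similarity)."""
    return -sum(cosine(a, b) for a, b in zip(z1, z2)) / len(z1)

def independence_loss(z):
    """Penalize squared off-diagonal covariances between representation
    dimensions, pushing dimensions toward pairwise decorrelation."""
    n, d = len(z), len(z[0])
    means = [sum(row[j] for row in z) / n for j in range(d)]
    loss = 0.0
    for i in range(d):
        for j in range(d):
            if i != j:
                cov = sum((row[i] - means[i]) * (row[j] - means[j])
                          for row in z) / n
                loss += cov * cov
    return loss
```

For identical views the invariance term reaches its minimum of -1.0, while the independence term is zero only when the representation dimensions are uncorrelated across nodes.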
Statistics
Cora: 2,708 nodes, 10,556 edges, 7 classes, 1,433 features
Citeseer: 3,327 nodes, 9,228 edges, 6 classes, 3,703 features
Pubmed: 19,717 nodes, 88,651 edges, 3 classes, 500 features
Wiki-CS: 11,701 nodes, 432,246 edges, 10 classes, 300 features
Flickr: 7,575 nodes, 479,476 edges, 9 classes, 12,047 features
Quotes
"The SCM offers two requirements and motivates us to propose a novel GCL method."
"Experimental results demonstrate the effectiveness of our approach on node classification tasks."