Key Idea
Capturing the dimensional rationale from graphs can improve the discriminability and transferability of graph representations learned by contrastive learning.
Summary
The paper proposes Dimensional Rationale-aware Graph Contrastive Learning (DRGCL), a framework designed to improve the quality of graph representations learned by contrastive pre-training.
Key highlights:
- Exploratory experiments show that preserving only specific dimensions of graph embeddings can outperform the primitive (full) representations on downstream tasks, suggesting the existence of a dimensional rationale (DR) in graphs.
- The authors formalize a structural causal model to analyze the innate mechanism behind the performance improvement brought by the DR, and find that the acquired DR is a causal confounder in graph contrastive learning.
- DRGCL is proposed to acquire redundancy-against (i.e., redundancy-resistant) DRs and perform backdoor adjustment on the causal model, leading to consistent improvements in discriminability and transferability on various benchmarks (the standard form of this adjustment is sketched after this list).
- Solid theoretical analyses are provided to prove the validity of DRGCL, including the relationship between structural rationale and DR, and the guarantees for DRGCL's effectiveness.
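For context, the backdoor adjustment mentioned above has a standard textbook form. Treating the dimensional rationale D as the confounder between the input graph G and the prediction Y, the interventional distribution is obtained by conditioning on D and averaging over its distribution; this is the generic formula, not the paper's concrete estimator:

```latex
% Standard backdoor adjustment with the dimensional rationale D as the confounder
% (generic form; the paper's concrete estimator may differ)
\[
P\bigl(Y \mid \mathrm{do}(G)\bigr) \;=\; \sum_{d} P\bigl(Y \mid G,\, D = d\bigr)\, P(D = d)
\]
```

Intuitively, this deconfounds the prediction by stratifying over candidate dimensional rationales instead of letting the learned DR bias the contrastive objective.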
Statistics
Preserving specific dimensions of graph embeddings can outperform the primitive (full) representations on downstream tasks (a minimal probing sketch follows below).
The acquired dimensional rationale is identified as a causal confounder in graph contrastive learning.
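To illustrate the probing idea behind this observation, the sketch below keeps only a chosen subset of embedding dimensions and evaluates each variant with a linear probe. The embeddings, labels, and the choice of dimensions are hypothetical placeholders, not the authors' actual setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins: pretrained graph-level embeddings and downstream labels.
# In the paper's setting these would come from a GCL encoder and a benchmark dataset.
Z = rng.normal(size=(500, 128))      # (num_graphs, embedding_dim)
y = rng.integers(0, 2, size=500)     # binary downstream labels

def probe_accuracy(Z, dims=None):
    """Linear-probe accuracy using only the selected embedding dimensions."""
    X = Z if dims is None else Z[:, dims]
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()

full_acc = probe_accuracy(Z)                           # primitive (full) representation
subset = rng.choice(Z.shape[1], size=32, replace=False)
subset_acc = probe_accuracy(Z, dims=subset)            # one candidate dimensional subset

print(f"full: {full_acc:.3f}  subset: {subset_acc:.3f}")
```

With real GCL embeddings, sweeping over many candidate dimension subsets and comparing subset_acc against full_acc is what would reveal whether such a dimensional rationale exists.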
Quotes
"Does there exist a manner to explore the intrinsic rationale in graphs, thereby improving the GCL predictions?"
"We rethink the dimensional rationale in graph contrastive learning from the causal perspective and further formalize the causality among the variables in the pre-training stage to build the corresponding structural causal model."