Explanation-Preserving Augmentation (EPA) is a graph augmentation technique that leverages explainable-AI (XAI) graph explanation methods to retain a graph's essential substructures while introducing controlled variations elsewhere. The authors argue that augmentation must inject the variation needed for representation learning without destroying the graph's core semantic information, and show that EPA significantly improves semi-supervised graph representation learning (GRL).
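To make the idea concrete, here is a minimal sketch of explanation-preserving edge dropping in Python. The `edge_importance` scores, `keep_ratio`, and `drop_prob` are hypothetical stand-ins for whatever explainer and hyperparameters EPA actually uses; the paper's procedure may differ.

```python
# Minimal sketch of explanation-preserving augmentation: edges the
# (hypothetical) explainer marks as essential are protected; the rest
# may be dropped at random.
import random

def epa_augment(edges, edge_importance, keep_ratio=0.3, drop_prob=0.2):
    """edges: list of (u, v) tuples; edge_importance: edge -> score in [0, 1]."""
    ranked = sorted(edges, key=lambda e: edge_importance[e], reverse=True)
    n_keep = int(len(ranked) * keep_ratio)
    essential = set(ranked[:n_keep])          # explanation subgraph: preserved
    return [e for e in edges
            if e in essential or random.random() > drop_prob]

# Example: a toy 4-node graph with made-up importance scores.
edges = [(0, 1), (1, 2), (2, 3), (0, 3), (1, 3)]
importance = {(0, 1): 0.9, (1, 2): 0.8, (2, 3): 0.1, (0, 3): 0.2, (1, 3): 0.5}
print(epa_augment(edges, importance))
```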
This survey paper explores the potential of parametric graph representations, also known as graph laws, to address challenges and unlock new possibilities in graph representation learning, particularly in the context of developing large-scale foundation models and integrating graph learning with them.
Integrating Large Language Models (LLMs) and Graph Structure Learning Models (GSLMs) significantly improves the robustness and accuracy of graph representation learning, especially in noisy or incomplete graph scenarios.
This paper introduces GPTrans, a novel transformer architecture for graph representation learning that leverages a Graph Propagation Attention (GPA) mechanism to effectively capture and propagate information among nodes and edges, leading to state-of-the-art performance on various graph-level, node-level, and edge-level tasks.
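As a rough illustration of attention that mixes node and edge information, the single-head PyTorch sketch below lets edge features bias node-to-node attention and updates edge features from the attention map. This is an assumption-laden simplification meant only to gesture at the idea; it is not the paper's actual GPA mechanism.

```python
# Toy single-head attention with edge->node bias and node->edge update;
# a simplification, not GPTrans's actual Graph Propagation Attention.
import torch
import torch.nn as nn

class ToyGraphPropagationAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.edge_bias = nn.Linear(dim, 1)    # edge feature -> attention bias
        self.edge_update = nn.Linear(1, dim)  # attention score -> edge update

    def forward(self, x, e):
        # x: [n, dim] node features; e: [n, n, dim] dense edge features
        scores = self.q(x) @ self.k(x).T / x.shape[-1] ** 0.5  # node-node scores
        scores = scores + self.edge_bias(e).squeeze(-1)        # edge -> node path
        attn = scores.softmax(dim=-1)
        x_out = attn @ self.v(x)                               # propagate to nodes
        e_out = e + self.edge_update(attn.unsqueeze(-1))       # node -> edge path
        return x_out, e_out

x, e = torch.randn(5, 16), torch.randn(5, 5, 16)
nodes, edges = ToyGraphPropagationAttention(16)(x, e)
```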
Target-Aware Contrastive Learning (Target-aware CL), specifically using XGSampler, enhances node representation learning in graphs by strategically selecting positive examples during contrastive learning based on the target task, thereby improving model generalization and performance on downstream tasks like node classification and link prediction.
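The core selection idea can be sketched simply: positives are drawn only from candidates whose (pseudo-)labels agree with the anchor's, so the contrastive signal aligns with the downstream task. The function below is a hypothetical simplification, not the actual XGSampler.

```python
# Toy target-aware positive sampling: restrict positives to candidates
# that the target task considers similar to the anchor (here, matching
# pseudo-labels). The real XGSampler is more involved.
import random

def target_aware_positives(anchor, candidates, pseudo_labels, k=2):
    same_class = [c for c in candidates
                  if pseudo_labels[c] == pseudo_labels[anchor]]
    return random.sample(same_class, min(k, len(same_class)))

pseudo_labels = {0: "A", 1: "A", 2: "B", 3: "A", 4: "B"}
print(target_aware_positives(0, [1, 2, 3, 4], pseudo_labels))
```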
Local Euler Characteristic Transforms (ℓ-ECTs) offer a more expressive and interpretable alternative to traditional graph neural networks (GNNs) for graph representation learning, particularly in tasks where preserving local structural information is crucial, such as graphs with high heterophily.
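For intuition, a local ECT can be sketched directly from the Euler characteristic of a graph, χ = |V| − |E|: project node features onto a direction, sweep a threshold, and record χ of each sublevel subgraph of a node's neighborhood. The specifics below (1-hop neighborhood, features as positions, uniform thresholds) are assumptions, not the paper's exact construction.

```python
# Minimal sketch of a local Euler Characteristic Transform (l-ECT) for a
# node: for each direction and threshold, compute chi = |V| - |E| of the
# sublevel subgraph of the node's 1-hop neighbourhood.
import numpy as np

def local_ect(center, edges, pos, directions, thresholds):
    nbhd = {center} | {v for u, v in edges if u == center} \
                    | {u for u, v in edges if v == center}
    sub_edges = [(u, v) for u, v in edges if u in nbhd and v in nbhd]
    curves = []
    for d in directions:                       # unit direction vectors
        heights = {v: pos[v] @ d for v in nbhd}
        curve = []
        for t in thresholds:                   # sublevel-set filtration
            V = {v for v in nbhd if heights[v] <= t}
            E = [(u, v) for u, v in sub_edges if u in V and v in V]
            curve.append(len(V) - len(E))      # Euler characteristic
        curves.append(curve)
    return np.array(curves)                    # [n_directions, n_thresholds]

pos = {i: np.random.randn(2) for i in range(5)}
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
dirs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(local_ect(0, edges, pos, dirs, np.linspace(-2, 2, 5)))
```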
The proposed GRE2-MDCL model enhances graph representation learning by combining local-global graph augmentation, a triple graph neural network architecture, and multidimensional contrastive learning, leading to improved node classification performance.
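The contrastive component can be illustrated with a generic NT-Xent loss between node embeddings from two augmented views; GRE2-MDCL's multidimensional objective presumably combines several such terms, so this shows only the basic building block, not the paper's full loss.

```python
# Generic NT-Xent contrastive loss between two views' node embeddings:
# node i in view 1 is pulled toward node i in view 2 and pushed from all
# other nodes. One building block, not GRE2-MDCL's complete objective.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.T / tau                    # [n, n] cross-view similarities
    targets = torch.arange(z1.shape[0])      # matching indices are positives
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
print(nt_xent(z1, z2))
```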
Graph representation learning and semi-supervised learning are effective approaches for predicting fatty liver disease.
An extended Variational Graph Auto-Encoder (VGAE) improves semi-supervised graph representation learning by additionally leveraging label information and self-label augmentation.
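For reference, the standard VGAE backbone (Kipf & Welling, 2016) pairs a graph-convolutional encoder producing per-node mean and log-variance with an inner-product decoder; the label-aware and self-label-augmentation components described above would sit on top of this backbone and are not shown here.

```python
# Standard VGAE backbone: GCN-style encoder -> (mu, logvar) per node,
# reparameterized latent z, inner-product decoder reconstructing A.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VGAE(nn.Module):
    def __init__(self, in_dim, hid_dim, z_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w_mu = nn.Linear(hid_dim, z_dim)
        self.w_logvar = nn.Linear(hid_dim, z_dim)

    def forward(self, x, a_norm):
        h = F.relu(a_norm @ self.w1(x))                  # graph convolution
        mu, logvar = a_norm @ self.w_mu(h), a_norm @ self.w_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        a_hat = torch.sigmoid(z @ z.T)                   # inner-product decoder
        return a_hat, mu, logvar

def vgae_loss(a_hat, a, mu, logvar):
    recon = F.binary_cross_entropy(a_hat, a)             # adjacency reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

x = torch.randn(6, 10)
a = (torch.rand(6, 6) > 0.5).float()
a = ((a + a.T) > 0).float()                              # symmetrize toy adjacency
a_norm = a / a.sum(1, keepdim=True).clamp(min=1)         # row-normalize
a_hat, mu, logvar = VGAE(10, 16, 8)(x, a_norm)
print(vgae_loss(a_hat, a, mu, logvar))
```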