Large language models can enhance graph structure learning by denoising noisy connections and uncovering implicit node-wise dependencies.
Large Language Models (LLMs) enhance graph quality for representation learning on Multivariate Time-Series (MTS) data.
Local Message Compensation (LMC) is a subgraph-wise sampling method with provable convergence that improves training efficiency for GNNs.
Standard GNNs outperform specialized models when evaluated under heterophily.