Core Concepts
Recurrent Structure-reinforced Graph Transformer (RSGT) is a novel recurrent learning framework that explicitly models the temporal states of edges in dynamic graphs to enhance the learning of node representations.
Abstract
The paper introduces a novel dynamic graph representation learning framework called Recurrent Structure-reinforced Graph Transformer (RSGT). The key highlights are:
RSGT models the temporal states of edges by assigning different edge types (emerging, persisting, disappearing) and weights based on the differences between consecutive snapshots. This allows RSGT to integrate the edge temporal states into the graph topological structure.
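The snapshot-difference idea above can be sketched as a simple set comparison. This is a minimal illustration, not the paper's implementation; the function name and edge representation (sets of node pairs) are assumptions for the example.

```python
def edge_temporal_states(prev_edges, curr_edges):
    """Classify edges by comparing two consecutive snapshots.

    prev_edges, curr_edges: sets of (u, v) tuples representing the
    edges of snapshot t-1 and snapshot t (hypothetical encoding).
    Returns a dict mapping each edge to 'emerging', 'persisting',
    or 'disappearing', mirroring RSGT's three edge temporal states.
    """
    states = {}
    for e in curr_edges - prev_edges:
        states[e] = "emerging"      # appears only in the current snapshot
    for e in curr_edges & prev_edges:
        states[e] = "persisting"    # present in both snapshots
    for e in prev_edges - curr_edges:
        states[e] = "disappearing"  # present before, gone now
    return states
```

In RSGT these state labels (together with edge weights) are folded back into the graph's topological structure before encoding.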
RSGT employs a structure-reinforced graph transformer to capture both the global semantic correlation between nodes and the topological dependencies, while also encoding the evolving edge temporal states. This concurrent feature extraction enhances the effectiveness of dynamic graph representation learning.
Extensive experiments on four real-world datasets demonstrate RSGT's superior performance compared to existing methods, especially in dynamic link prediction tasks. RSGT consistently outperforms competing approaches across various evaluation metrics.
The ablation study confirms the importance of explicitly modeling edge temporal states and integrating graph topological information into the transformer architecture for effective dynamic graph representation learning.
The analysis of key hyperparameters, such as shortest path distance, window size, number of encoding layers, and attention heads, provides insights into the design choices that contribute to RSGT's robust performance.
Stats
This summary does not reproduce specific numerical data or statistics to support the key claims. The content focuses on describing the proposed RSGT framework and its evaluation against existing methods.
Quotes
There are no direct quotes from the content that support the key claims.