The paper aims to understand the logical expressivity of state-of-the-art Graph Neural Networks (GNNs) for knowledge graph reasoning. It first unifies popular GNN methods such as NBFNet and RED-GNN into a common framework called QL-GNN, which scores a candidate triplet from the representation of its tail entity.
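The QL-GNN scheme described above can be sketched in a few lines. The following is a hypothetical toy version (function name, message weight, and update rule are illustrative, not the paper's actual architecture): only the query entity gets a distinguished initial representation, messages are propagated along edges for a few layers, and each candidate tail is then scored from its final representation.

```python
# Toy sketch of query-conditioned propagation in the style of QL-GNN.
# Names and weights are illustrative assumptions, not the paper's method.
from collections import defaultdict

def ql_gnn_scores(triples, query_entity, num_layers=3):
    """Score every entity as a candidate tail for a query rooted at
    `query_entity`: initialize only the query entity with a nonzero
    label (the query constant), then propagate for `num_layers` steps."""
    out_edges = defaultdict(list)
    entities = set()
    for h, _, t in triples:
        out_edges[h].append(t)
        entities.update((h, t))
    rep = {e: 0.0 for e in entities}
    rep[query_entity] = 1.0              # query entity acts as a constant
    for _ in range(num_layers):
        new_rep = {e: rep[e] for e in entities}   # residual / self-loop
        for h in entities:
            for t in out_edges[h]:
                new_rep[t] += 0.5 * rep[h]        # toy message weight
        rep = new_rep
    return rep                            # final representation = score
```

On a chain `a -> b -> c`, the query entity `a` keeps score 1.0 and entities closer to it accumulate higher scores, mirroring how chain-like rules are captured by propagation from the query constant.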
The key insights are:
- QL-GNN can learn the rule structures described by formulas of graded modal logic (CML) with the query entity as a constant. This class includes chain-like rules and some more complex structures.
- To learn rule structures beyond QL-GNN's capacity, the paper proposes EL-GNN, which applies the labeling trick to additional entities in the graph besides the query entity, enlarging the class of rule structures it can learn.
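The extra-labeling idea can be illustrated with a small sketch. Everything here is a hypothetical simplification: the function name, the use of degree as the labeling criterion, and the threshold value are assumptions for illustration, chosen because labeling a few well-connected entities as additional constants is one natural way to distinguish structures a single query constant cannot.

```python
# Illustrative sketch of assigning extra entity labels (EL-GNN-style).
# The degree-threshold criterion and all names are assumptions.
from collections import defaultdict

def initial_labels(triples, query_entity, degree_threshold=2):
    """Assign initial labels: 0 for the query entity (as in QL-GNN),
    a fresh distinct id for each entity whose degree exceeds the
    threshold (the extra constants), and -1 for unlabeled entities."""
    degree = defaultdict(int)
    entities = set()
    for h, _, t in triples:
        degree[h] += 1
        degree[t] += 1
        entities.update((h, t))
    labels, next_id = {}, 1
    for e in sorted(entities):            # deterministic label order
        if e == query_entity:
            labels[e] = 0                 # query constant
        elif degree[e] > degree_threshold:
            labels[e] = next_id           # extra constant beyond the query
            next_id += 1
        else:
            labels[e] = -1                # no label
    return labels
```

With distinct labels on a few extra entities, the subsequent propagation can express formulas with additional constants, which is the mechanism the paper credits for EL-GNN's broader expressivity.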
Experiments on synthetic datasets validate the theoretical findings, showing that EL-GNN learns rule structures that QL-GNN fails to capture. On real-world datasets, EL-GNN also improves over QL-GNN-based methods.
The paper provides a formal analysis of the logical expressivity of state-of-the-art GNNs for knowledge graph reasoning, explaining their empirical success and inspiring novel GNN designs to learn more complex rule structures.