The paper presents ULTRA, an approach for learning universal and transferable graph representations that can serve as a foundation model for knowledge graph reasoning. The key challenge in designing such foundation models is to learn transferable representations that enable inference on any graph with arbitrary entity and relation vocabularies.
ULTRA addresses this challenge by:
Constructing a graph of relations, where each node represents a relation type from the original graph. This captures the fundamental interactions between relations, which are transferable across graphs.
Learning relative relation representations conditioned on the query relation by applying a graph neural network to the relation graph. These conditional representations require no graph-specific input features and can therefore generalize to any unseen graph.
Using the learned relation representations as input to an inductive link predictor, which can then be applied to any knowledge graph.
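The first two steps above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the function names, the four interaction types (head-head, head-tail, tail-head, tail-tail abbreviated h2h, h2t, t2h, t2t), and the simple sum-aggregation message passing are assumptions standing in for ULTRA's learned, per-edge-type GNN.

```python
from collections import defaultdict

def build_relation_graph(triples):
    """Build a graph whose nodes are relation types.

    Two relations are connected when they co-occur on a shared entity;
    the edge label records which of the four interaction types links them
    (e.g. h2t: a tail entity of r1 is a head entity of r2).
    """
    heads = defaultdict(set)  # relation -> entities appearing as its head
    tails = defaultdict(set)  # relation -> entities appearing as its tail
    for h, r, t in triples:
        heads[r].add(h)
        tails[r].add(t)

    relations = sorted(set(heads) | set(tails))
    edges = []  # (r1, interaction_type, r2)
    for r1 in relations:
        for r2 in relations:
            if r1 == r2:
                continue
            if heads[r1] & heads[r2]:
                edges.append((r1, "h2h", r2))
            if heads[r1] & tails[r2]:
                edges.append((r1, "h2t", r2))
            if tails[r1] & heads[r2]:
                edges.append((r1, "t2h", r2))
            if tails[r1] & tails[r2]:
                edges.append((r1, "t2t", r2))
    return relations, edges

def conditional_relation_reps(relations, edges, query_rel, dim=4, rounds=2):
    """Query-conditioned relation representations (labeling-trick style).

    Only the query relation starts nonzero; propagation over the relation
    graph then spreads signal, so every representation is *relative* to
    the query. The real model uses learned per-interaction-type weights;
    this sketch just sums neighbor states.
    """
    reps = {r: [1.0] * dim if r == query_rel else [0.0] * dim
            for r in relations}
    for _ in range(rounds):
        new = {r: list(v) for r, v in reps.items()}
        for r1, _etype, r2 in edges:
            for i in range(dim):
                new[r2][i] += reps[r1][i]
        reps = new
    return reps
```

Because the initialization depends only on which node is the query (not on any pretrained relation vocabulary), the same procedure applies to a graph with entirely unseen relations, which is what makes the representations transferable.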
Experiments show that a single pre-trained ULTRA model can outperform strong supervised baselines trained on specific graphs, in both the zero-shot and fine-tuned settings. ULTRA demonstrates promising transfer learning: zero-shot performance on unseen graphs can exceed these baselines by up to 300%, and fine-tuning boosts performance further.
The paper highlights the potential of ULTRA as a foundation model for knowledge graph reasoning, where a single pre-trained model can be applied to a wide range of knowledge graphs, reducing the need for training specialized models for each graph.
Key insights distilled from https://arxiv.org/pdf/2310.04562.pdf by Mikhail Galk... at arxiv.org, 04-11-2024.