The paper proposes Instruction-based Hypergraph Pretraining (IHP), a framework that injects text-based instruction prompts into graph pretraining. Key highlights:
IHP constructs two hypergraphs - a target hypergraph and a context hypergraph - to distinguish between target nodes (present in both pretraining and downstream tasks) and context nodes (only in pretraining). This allows preserving prior knowledge in target nodes while capturing broader contextual patterns.
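The dual-hypergraph idea above can be sketched with incidence matrices. This is a minimal illustration, not the paper's construction: the node sets, hyperedges, and the `incidence` helper are all hypothetical, chosen only to show how a target hypergraph (restricted to nodes shared with the downstream task) differs from a context hypergraph (over all pretraining nodes).

```python
import numpy as np

# Hypothetical example: 6 pretraining nodes, of which {0, 1, 2} also appear
# in the downstream task (target nodes) and {3, 4, 5} do not (context nodes).
target_nodes = {0, 1, 2}
all_nodes = target_nodes | {3, 4, 5}
hyperedges = [{0, 1, 3}, {1, 2, 4}, {0, 2, 5}]  # groups of co-occurring nodes

def incidence(nodes, edges):
    """Binary incidence matrix H: H[i, j] = 1 if node i lies in hyperedge j."""
    idx = {n: i for i, n in enumerate(sorted(nodes))}
    H = np.zeros((len(nodes), len(edges)))
    for j, e in enumerate(edges):
        for n in e & set(nodes):
            H[idx[n], j] = 1.0
    return H

# Target hypergraph: hyperedges restricted to target nodes, preserving the
# structure that carries over to the downstream task.
H_target = incidence(target_nodes, hyperedges)
# Context hypergraph: all pretraining nodes, capturing broader co-occurrence.
H_context = incidence(all_nodes, hyperedges)
```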
A novel Prompt Hypergraph Convolution (PHC) layer is devised to integrate text-based instructions into the hypergraph convolution process, enabling the model to capture high-order relations with task-specific guidance.
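To make the PHC idea concrete, here is a rough sketch of a prompt-conditioned hypergraph convolution. It follows the standard two-stage hypergraph propagation (nodes to hyperedges and back, degree-normalized); the way the instruction embedding is fused (simple addition here) and every name below are assumptions for illustration, not the paper's actual PHC formulation.

```python
import numpy as np

def phc_layer(X, H, instr, W):
    """Sketch of a prompt-conditioned hypergraph convolution.

    X     : (n, d)  node features
    H     : (n, m)  hyperedge incidence matrix
    instr : (d,)    embedding of the text instruction (hypothetical fusion:
                    broadcast-added to every node feature before propagation)
    W     : (d, d)  learnable weight matrix
    """
    Dv = np.diag(1.0 / np.maximum(H.sum(axis=1), 1e-9))  # inverse node degrees
    De = np.diag(1.0 / np.maximum(H.sum(axis=0), 1e-9))  # inverse edge degrees
    Xp = X + instr  # inject the instruction prompt into each node (assumed)
    # Two-stage propagation: nodes -> hyperedges (H.T) -> nodes (H), then ReLU.
    return np.maximum(Dv @ H @ De @ H.T @ Xp @ W, 0.0)

# Toy forward pass on random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
H = (rng.random((4, 3)) > 0.5).astype(float)
instr = rng.normal(size=8)
W = rng.normal(size=(8, 8))
out = phc_layer(X, H, instr, W)
```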
An instruction-based finetuning paradigm updates both seen and unseen nodes in the downstream task, balancing retention of prior knowledge with efficient adaptation to the new task.
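One plausible reading of the seen/unseen split can be sketched as an initialization scheme: seen nodes start from their pretrained embeddings while unseen nodes start fresh, and both are trainable during finetuning. This is an assumed interpretation for illustration; the node IDs and the `init_downstream` helper are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# Hypothetical pretrained embeddings for nodes seen during pretraining.
pretrained = {f"n{i}": rng.normal(size=d) for i in range(3)}

def init_downstream(node_ids, pretrained, d, rng):
    """Seen nodes inherit pretrained embeddings (retaining prior knowledge);
    unseen nodes get fresh random vectors. Both sets are then updated
    during instruction-based finetuning."""
    return {
        n: pretrained[n].copy() if n in pretrained else rng.normal(size=d)
        for n in node_ids
    }

emb = init_downstream(["n0", "n1", "unseen_a"], pretrained, d, rng)
```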
Extensive experiments on three real-world datasets demonstrate the superiority of IHP over various baselines in link prediction and node classification tasks, showcasing its effectiveness in leveraging instructions to enhance graph pretraining.
Key insights from the original content by Mingdai Yang... at arxiv.org, 03-29-2024
https://arxiv.org/pdf/2403.19063.pdf