The paper proposes Instruction-based Hypergraph Pretraining (IHP), a framework that injects text-based instructions into graph pretraining to guide what the model learns. Key highlights:
IHP constructs two hypergraphs - a target hypergraph and a context hypergraph - to distinguish between target nodes (present in both pretraining and downstream tasks) and context nodes (only in pretraining). This allows preserving prior knowledge in target nodes while capturing broader contextual patterns.
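The target/context split above can be sketched as two incidence matrices over the same hyperedges, one restricted to each node group. The node names, hyperedges, and the incidence-matrix formulation here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical toy data: which nodes count as "target" vs. "context"
# is an assumption for illustration only.
target_nodes = ["u1", "u2", "u3"]   # appear in pretraining AND downstream tasks
context_nodes = ["c1", "c2"]        # appear only during pretraining
hyperedges = [["u1", "u2", "c1"], ["u2", "u3", "c2"], ["u1", "u3"]]

def incidence(nodes, edges):
    """Binary incidence matrix H: H[i, j] = 1 iff node i belongs to hyperedge j."""
    idx = {n: i for i, n in enumerate(nodes)}
    H = np.zeros((len(nodes), len(edges)))
    for j, edge in enumerate(edges):
        for n in edge:
            if n in idx:            # nodes outside this group are dropped
                H[idx[n], j] = 1.0
    return H

H_target = incidence(target_nodes, hyperedges)    # (3 nodes, 3 hyperedges)
H_context = incidence(context_nodes, hyperedges)  # (2 nodes, 3 hyperedges)
```

Keeping the two incidence matrices separate is what lets the model treat target-node representations (to be reused downstream) differently from context-node representations (pretraining-only signal).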
A novel Prompt Hypergraph Convolution (PHC) layer is devised to integrate text-based instructions into the hypergraph convolution process, enabling the model to capture high-order relations with task-specific guidance.
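A minimal sketch of what a prompt-conditioned hypergraph convolution step could look like: a standard node-to-hyperedge-to-node aggregation, with the instruction embedding gating the hyperedge messages. The gating scheme and normalization here are assumptions for illustration, not the paper's exact PHC layer:

```python
import numpy as np

def prompt_hyperconv(X, H, instr):
    """One illustrative prompt-conditioned hypergraph convolution step.

    X:     (N, d) node features
    H:     (N, E) binary incidence matrix
    instr: (d,)   instruction embedding (how it is injected is assumed)
    """
    Dv = np.clip(H.sum(axis=1), 1, None)   # node degrees (clipped to avoid /0)
    De = np.clip(H.sum(axis=0), 1, None)   # hyperedge degrees
    edge_msg = (H.T @ X) / De[:, None]     # aggregate nodes -> hyperedges
    gate = 1.0 + np.tanh(edge_msg @ instr) # instruction-dependent scaling per edge
    edge_msg = edge_msg * gate[:, None]    # task-specific guidance on messages
    return (H @ edge_msg) / Dv[:, None]    # scatter hyperedges -> nodes
```

The two-stage aggregation is what captures high-order relations (a hyperedge can connect any number of nodes); the gate is where task instructions steer the convolution.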
An instruction-based finetuning paradigm is designed to update both seen and unseen nodes in the downstream task, achieving a balance between retaining prior knowledge and adapting efficiently.
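The seen/unseen split at finetuning time can be sketched as follows: seen nodes start from their pretrained vectors (retaining prior knowledge), while unseen nodes get freshly initialized trainable vectors (efficient adaptation). The initialization scheme and function name are assumptions, not the paper's exact procedure:

```python
import numpy as np

def init_downstream_embeddings(pretrained, downstream_nodes, dim, rng):
    """Build the downstream embedding table.

    pretrained:       dict node -> pretrained vector (seen nodes)
    downstream_nodes: nodes appearing in the downstream task
    """
    table = {}
    for n in downstream_nodes:
        if n in pretrained:
            table[n] = pretrained[n].copy()      # seen: reuse prior knowledge
        else:
            table[n] = rng.normal(0.0, 0.02, dim)  # unseen: fresh trainable vector
    return table
```

Both groups are then updated during instruction-based finetuning; only their starting points differ.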
Extensive experiments on three real-world datasets demonstrate that IHP outperforms a range of baselines on link prediction and node classification.