Leveraging Multimodal Information for Zero-Shot Relational Learning in Knowledge Graphs
This article proposes MRE, a novel end-to-end framework that integrates diverse multimodal information with knowledge graph structure to enable zero-shot relational learning: inferring missing triples for newly emerging relations that have no associated training data.