The study explores a framework that uses language models for new concept placement in ontologies. The framework involves three steps: edge search, edge formation and enrichment, and edge selection. Evaluation on benchmark datasets shows the effectiveness of leveraging neural methods such as BERT and Large Language Models (LLMs) for this task.
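To make the three steps concrete, the sketch below outlines the pipeline in Python. The function names, the edge representation as (parent, child) pairs, and the trivial lexical scoring are illustrative assumptions only; the paper's actual components are neural, as described next.

```python
# Illustrative pipeline skeleton; names and data shapes are assumptions,
# and the lexical overlap score stands in for the paper's neural models.

def search_edges(mention: str, ontology_edges: list, k: int = 10) -> list:
    """Step 1 (edge search): retrieve top-k candidate <parent, child> edges."""
    def overlap(edge):
        edge_words = set(f"{edge[0]} {edge[1]}".lower().split())
        return len(set(mention.lower().split()) & edge_words)
    return sorted(ontology_edges, key=overlap, reverse=True)[:k]

def enrich_edges(candidates: list, parents_of: dict) -> list:
    """Step 2 (edge formation and enrichment): expand candidates, e.g. by
    also proposing edges from each parent's own parents (one hop up)."""
    enriched = list(candidates)
    for parent, child in candidates:
        enriched.extend((gp, child) for gp in parents_of.get(parent, []))
    return enriched

def select_edges(mention: str, candidates: list) -> list:
    """Step 3 (edge selection): keep the best edge(s); a cross-encoder
    would score each (mention, edge) pair here."""
    return candidates[:1]

edges = [("viral disease", "viral bronchitis"), ("lung disease", "pneumonia")]
parents = {"viral disease": ["infectious disease"]}
found = enrich_edges(search_edges("acute viral bronchitis", edges), parents)
print(select_edges("acute viral bronchitis", found))
```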
The research addresses the challenge of inserting new concepts into an ontology by leveraging neural methods such as embedding-based techniques and contrastive learning with Pre-trained Language Models (PLMs). The study evaluates different data representation methods on datasets created from the SNOMED CT ontology and the MedMentions entity linking benchmark.
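As a hedged illustration of the embedding-based search step, the following sketch uses a BERT bi-encoder to embed a new concept mention and verbalized candidate edges, then ranks edges by cosine similarity. The checkpoint name and the "parent -> child" edge verbalization are assumptions; in the paper, contrastive fine-tuning would pull mention embeddings toward their correct edges, whereas the off-the-shelf model here only illustrates the retrieval interface.

```python
# Hedged sketch of embedding-based edge search with a PLM bi-encoder.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Mean-pool the final hidden states into one vector per input text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (B, H)

mention = "acute viral bronchitis"
candidate_edges = ["bronchitis -> viral bronchitis", "lung disease -> pneumonia"]
scores = torch.nn.functional.cosine_similarity(embed([mention]), embed(candidate_edges))
print(candidate_edges[int(scores.argmax())])
```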
Results indicate that fine-tuned PLMs are effective for edge search, while a multi-label cross-encoder performs well for edge selection. The study also finds that Large Language Models (LLMs) show promise but require further investigation. Overall, the research demonstrates the potential of advanced language models for ontology concept placement.
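For the selection step, a multi-label cross-encoder jointly encodes each (mention, edge) pair and applies an independent sigmoid per candidate, so several edges can be accepted at once. The sketch below assumes a BERT checkpoint with a single-logit classification head; a real system would fine-tune this head rather than use it untrained.

```python
# Hedged sketch of multi-label cross-encoder edge selection. The untuned
# "bert-base-uncased" head is a placeholder for a fine-tuned checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1
)

def select_edges(mention: str, candidate_edges: list, threshold: float = 0.5):
    """Jointly encode each (mention, edge) pair; independent sigmoids make
    the decision multi-label, so more than one edge can be selected."""
    batch = tokenizer([mention] * len(candidate_edges), candidate_edges,
                      padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits.squeeze(-1)  # one score per pair
    probs = torch.sigmoid(logits)
    return [edge for edge, p in zip(candidate_edges, probs) if p > threshold]

print(select_edges("acute viral bronchitis",
                   ["bronchitis -> viral bronchitis", "lung disease -> pneumonia"]))
```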
Key insights extracted from the source content by Hang Dong, Ji... at arxiv.org, 02-29-2024: https://arxiv.org/pdf/2402.17897.pdf