The study explores soft reasoning on uncertain knowledge graphs by introducing soft queries answered with ML-based inference. Soft requirements let users control how much knowledge uncertainty a query tolerates. The proposed method, SRC, outperforms traditional query embedding methods such as LogicE and ConE. The authors also analyze how varying the soft requirements affects model performance, showing that SRC generalizes robustly across settings, and an evaluation against large language models highlights the effectiveness of neural-symbolic approaches for answering complex logical queries.
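To make the idea of a "soft requirement" concrete, here is a minimal, hypothetical sketch: facts in an uncertain knowledge graph carry confidence scores, and a soft query keeps only answers whose supporting facts meet a user-chosen threshold α. All data and names below are illustrative assumptions, not the paper's actual SRC implementation.

```python
# Hypothetical uncertain KG: each triple carries a confidence score in [0, 1].
# (These example facts and the threshold alpha are illustrative only.)
triples = [
    ("cat", "IsA", "animal", 0.95),
    ("cat", "IsA", "plant", 0.10),
    ("dog", "IsA", "animal", 0.90),
]

def soft_answer(head, relation, alpha, triples):
    """Answer a one-hop soft query: return tails t such that the fact
    (head, relation, t) holds with confidence >= alpha."""
    return {t for h, r, t, c in triples if h == head and r == relation and c >= alpha}

# A strict requirement (alpha=0.5) filters out the low-confidence "plant" fact.
print(soft_answer("cat", "IsA", 0.5, triples))   # {'animal'}
# A permissive requirement (alpha=0.0) admits every asserted fact.
print(soft_answer("cat", "IsA", 0.0, triples))   # {'animal', 'plant'}
```

Raising α trades recall for precision over noisy facts, which is the behavior the soft requirements are meant to expose to the user.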
The dataset construction builds on three standard uncertain knowledge graphs (CN15k, PPI5k, and O*NET20K) for training and testing soft query answering methods. Model performance is assessed across various query types with MAP, NDCG, Spearman's rank correlation coefficient ρ, and Kendall's rank correlation coefficient τ. The study also investigates how the soft requirements (α and β) affect performance, demonstrating SRC's consistent results across different settings compared to traditional query embedding methods.
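The ranking metrics above compare a model's predicted scores against ground-truth confidences. A stdlib-only sketch of Spearman's ρ, Kendall's τ, and NDCG is below; the `truth` and `scores` arrays are made-up illustrative values, and ties are ignored for brevity (real evaluation code would handle them).

```python
import math

# Hypothetical ground-truth confidences and model scores for one query's answers.
truth  = [0.9, 0.7, 0.4, 0.1]
scores = [0.8, 0.75, 0.3, 0.2]

def ranks(xs):
    """Rank positions (1 = largest value); assumes no ties for simplicity."""
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def spearman_rho(a, b):
    """Spearman's rho via the rank-difference formula (no-ties case)."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

def kendall_tau(a, b):
    """Kendall's tau: (concordant - discordant) pairs over all pairs."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

def ndcg(truth, scores):
    """NDCG with ground-truth confidences as graded relevance."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    dcg = sum(truth[i] / math.log2(pos + 2) for pos, i in enumerate(order))
    idcg = sum(g / math.log2(pos + 2)
               for pos, g in enumerate(sorted(truth, reverse=True)))
    return dcg / idcg

# The example rankings agree perfectly, so every metric is 1.0.
print(spearman_rho(truth, scores), kendall_tau(truth, scores), ndcg(truth, scores))
```

ρ and τ measure how well the model preserves the ordering of answers by confidence, while NDCG additionally rewards placing high-confidence answers near the top.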
The comparison with large language models shows that SRC outperforms GPT-3.5-turbo and GPT-4-preview in accurately answering manually annotated queries derived from CN15k. Overall, the study emphasizes the value of neural-symbolic approaches for efficiently inferring implicit information from industrial-scale uncertain knowledge graphs.
Key insights distilled from the paper by Weizhi Fei, Z... at arxiv.org, 03-05-2024: https://arxiv.org/pdf/2403.01508.pdf