The paper introduces PEARLM, a novel approach for explainable recommendation systems that leverages language modelling to capture user behaviour and product-side knowledge. Its key contributions are as follows.
The authors first conduct an empirical study on hallucination in KG-based explainable recommendation systems, highlighting its effect on user trust and the challenge of detecting inaccuracies.
PEARLM's training involves sampling user-centric paths from the KG and using a causal language model to predict the next token in the sequence. The model's architecture is designed to be sensitive to the sequential flow and hierarchical structure of KG paths, with a tailored 'masked' self-attention mechanism ensuring that generated predictions respect the sequential order and logical consistency of the paths.
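As a rough illustration of this training setup, the sketch below serializes one sampled user-centric path into a token sequence and applies a next-token (causal) objective with masked self-attention. The toy vocabulary, model sizes, and helper names such as `path_to_ids` are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Toy token vocabulary over KG entities and relations; the real vocabulary
# would cover every entity and relation in the KG (illustrative assumption).
vocab = ["<pad>", "<bos>", "<eos>",
         "user_1", "watched", "movie_42", "directed_by", "director_7",
         "directed", "movie_99"]
tok2id = {t: i for i, t in enumerate(vocab)}

def path_to_ids(path):
    """Serialize one sampled user-centric KG path into token ids."""
    return torch.tensor([tok2id["<bos>"]] + [tok2id[t] for t in path] + [tok2id["<eos>"]])

# A path alternates entities and relations:
# user_1 -watched-> movie_42 -directed_by-> director_7 -directed-> movie_99
ids = path_to_ids(["user_1", "watched", "movie_42",
                   "directed_by", "director_7", "directed", "movie_99"])

# Causal LM objective: predict token t+1 from tokens up to t.
inputs, targets = ids[:-1], ids[1:]

# Tiny causal transformer (sizes are illustrative, not the paper's).
d_model, n_heads, max_len = 32, 4, 32
tok_emb = nn.Embedding(len(vocab), d_model)
pos_emb = nn.Embedding(max_len, d_model)
layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=64,
                                   batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
lm_head = nn.Linear(d_model, len(vocab))

positions = torch.arange(inputs.size(0))
x = (tok_emb(inputs) + pos_emb(positions)).unsqueeze(0)   # (1, seq, d_model)
# Causal mask: each position can only attend to itself and earlier tokens.
seq_len = inputs.size(0)
causal_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
h = encoder(x, mask=causal_mask)                          # masked self-attention
logits = lm_head(h).squeeze(0)                            # (seq, vocab)
loss = nn.functional.cross_entropy(logits, targets)
loss.backward()                                           # gradients for one training step
print(f"next-token loss: {loss.item():.3f}")
```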
During inference, PEARLM employs a novel Graph-Constrained Decoding (GCD) method that incorporates KG constraints directly into the sequence generation process, ensuring the resulting paths faithfully represent the actual KG structure.
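The sketch below conveys the core idea of graph-constrained decoding under assumed data structures: at each generation step, candidate tokens are restricted to edges that actually exist in a toy KG adjacency map (`kg_edges`), and everything else is masked out before selection. The helper names and the greedy choice are illustrative; PEARLM's actual decoding procedure may differ in its details.

```python
import torch

# Toy KG adjacency: for each entity, the (relation -> entities) edges that exist.
# The structure and names are illustrative assumptions, not the authors' format.
kg_edges = {
    "user_1":     {"watched": ["movie_42"]},
    "movie_42":   {"directed_by": ["director_7"]},
    "director_7": {"directed": ["movie_42", "movie_99"]},
}

def allowed_next_tokens(prefix):
    """Return the tokens the KG permits after the current prefix.

    If the prefix ends with an entity, only relations leaving that entity are
    valid; if it ends with a relation, only entities reachable through that
    relation from the preceding entity are valid.
    """
    last = prefix[-1]
    if last in kg_edges:                       # prefix ends with an entity
        return set(kg_edges[last].keys())
    relation, entity = prefix[-1], prefix[-2]  # prefix ends with a relation
    return set(kg_edges[entity][relation])

def graph_constrained_step(logits, prefix, vocab):
    """Mask every token that would leave the KG, then pick the best survivor."""
    mask = torch.full_like(logits, float("-inf"))
    for tok in allowed_next_tokens(prefix):
        mask[vocab[tok]] = 0.0
    return (logits + mask).argmax().item()

# Usage with dummy logits (in PEARLM these come from the trained language model).
vocab = {t: i for i, t in enumerate(
    ["user_1", "watched", "movie_42", "directed_by", "director_7",
     "directed", "movie_99"])}
id2tok = {i: t for t, i in vocab.items()}

prefix = ["user_1", "watched", "movie_42", "directed_by", "director_7", "directed"]
logits = torch.randn(len(vocab))               # stand-in for the model's output
next_id = graph_constrained_step(logits, prefix, vocab)
print(id2tok[next_id])                         # always an entity reachable in the KG
```

Because the constraint is applied at every step of generation, any completed sequence corresponds to a path that exists in the KG, which is what makes the decoded explanations faithful by construction.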
Comprehensive experiments on the MovieLens1M and LastFM1M datasets demonstrate PEARLM's significant improvements in recommendation utility, coverage, novelty, and serendipity over state-of-the-art baselines. The authors also analyze the impact of key modelling factors, such as dataset size, path length, and language model size, confirming PEARLM's scalability and effectiveness across configurations.
Key insights distilled from https://arxiv.org/pdf/2310.16452.pdf by Giacomo Ball... (arxiv.org, 05-01-2024).