The paper introduces PEARLM, a novel approach to explainable recommendation that leverages language modelling over knowledge graph (KG) paths to capture both user behaviour and product-side knowledge. Its key contributions are as follows.
The authors first conduct an empirical study of hallucination in KG-based explainable recommendation systems, highlighting how unfaithful explanation paths undermine user trust and how difficult such inaccuracies are to detect.
PEARLM's training involves sampling user-centric paths from the KG and using a causal language model to predict the next token in the sequence. The model's architecture is designed to be sensitive to the sequential flow and hierarchical structure of KG paths, with a tailored 'masked' self-attention mechanism ensuring the generated predictions adhere to the chronological order and logical consistency of the paths.
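To make the training procedure concrete, the following is a minimal sketch, not the authors' code, of next-token prediction over serialized KG paths with causal ('masked') self-attention. The token layout (alternating entity and relation tokens), model sizes, and names such as `CausalPathLM` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class CausalPathLM(nn.Module):
    """Tiny decoder-only LM over KG-path tokens (illustrative, not the paper's exact architecture)."""

    def __init__(self, vocab_size, d_model=128, n_heads=4, n_layers=2, max_len=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, ids):  # ids: (batch, seq_len) token ids of a serialized path
        t = ids.size(1)
        x = self.tok_emb(ids) + self.pos_emb(torch.arange(t, device=ids.device))
        # Causal ("masked") self-attention: position i attends only to positions <= i,
        # so each prediction depends only on the path prefix, preserving its order.
        causal_mask = torch.triu(
            torch.full((t, t), float("-inf"), device=ids.device), diagonal=1)
        return self.head(self.blocks(x, mask=causal_mask))


def next_token_loss(model, batch):
    # batch: (B, T) ids of a user-centric path, e.g.
    # [user_42, watched, movie_7, directed_by, person_3, directed, movie_9]
    logits = model(batch[:, :-1])  # predict token t from tokens < t
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), batch[:, 1:].reshape(-1))
```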
During inference, PEARLM employs a novel Graph-Constrained Decoding (GCD) method that incorporates KG constraints directly into the sequence generation process, ensuring the resulting paths faithfully represent the actual KG structure.
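The core idea behind graph-constrained decoding can be sketched as logit masking at each generation step: only tokens corresponding to edges that actually exist in the KG keep a finite score. This is an illustrative sketch under stated assumptions, not the paper's implementation; `kg_edges`, `rel_to_id`, and `ent_to_id` are hypothetical lookup structures, and paths are assumed to alternate entity and relation tokens.

```python
import torch


def constrain_next_token(logits, path, kg_edges, rel_to_id, ent_to_id):
    """Mask next-token scores so only KG-consistent continuations survive (illustrative sketch).

    logits:   (vocab_size,) scores from the language model for the next token.
    path:     decoded tokens so far, alternating entity, relation, entity, ...
    kg_edges: hypothetical map entity -> {relation: set(tail entities)} built from the KG.
    """
    last = path[-1]
    if last in kg_edges:
        # Last token is an entity: the next token must be one of its outgoing relations.
        allowed = {rel_to_id[r] for r in kg_edges[last]}
    else:
        # Last token is a relation: the next token must be a tail entity reachable
        # from the previous entity via that relation.
        head, rel = path[-2], path[-1]
        allowed = {ent_to_id[e] for e in kg_edges[head][rel]}
    mask = torch.full_like(logits, float("-inf"))
    mask[torch.tensor(sorted(allowed), device=logits.device)] = 0.0
    return logits + mask  # disallowed tokens get probability zero after softmax
```

Applied inside beam search or sampling, this kind of masking ensures every decoded path corresponds to a walk that exists in the KG; a practical implementation would also need to handle entities with no outgoing edges and the user-anchored first hop.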
Comprehensive experiments on the MovieLens1M and LastFM1M datasets demonstrate PEARLM's significant improvements in recommendation utility, coverage, novelty, and serendipity over state-of-the-art baselines. The authors also analyze the impact of key modelling factors, such as dataset size, path length, and language model size, confirming PEARLM's scalability and effectiveness across configurations.
Source: Giacomo Ball..., arxiv.org, 05-01-2024, https://arxiv.org/pdf/2310.16452.pdf