PEARL introduces a novel conversational recommendation dataset synthesized with persona- and knowledge-augmented LLM simulators. It addresses limitations of existing datasets by providing more specific user preferences, domain expertise, and relevant recommendations. The construction process groups real-world reviews to extract detailed persona and item knowledge. Experimental results show that models trained on PEARL outperform those trained on human-annotated datasets on recommendation tasks, and human evaluation indicates that PEARL is preferred over existing datasets for its quality and utility.