PEARL is a novel conversational recommendation dataset synthesized with persona- and knowledge-augmented LLM simulators. It addresses limitations of existing datasets by providing more specific user preferences, expertise in the target domain, and relevant recommendations. The construction process groups real-world reviews to extract detailed persona and item knowledge. Experimental results show that models trained on PEARL outperform those trained on human-annotated datasets on recommendation tasks, and human evaluation indicates that PEARL is preferred over existing datasets for its quality and utility.
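The sketch below illustrates one way such a review-driven synthesis pipeline could look, assuming an OpenAI-style chat API. The grouping logic, prompts, model name, and function names are illustrative assumptions for exposition, not the authors' exact pipeline.

```python
# Minimal sketch: persona extraction from grouped reviews and a
# persona-augmented user simulator turn. Prompts and model choice are
# hypothetical; only the overall structure mirrors the described approach.
from collections import defaultdict
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def group_reviews(reviews):
    """Group raw reviews by reviewer (persona source) and by item (knowledge source)."""
    by_reviewer, by_item = defaultdict(list), defaultdict(list)
    for r in reviews:
        by_reviewer[r["reviewer_id"]].append(r["text"])
        by_item[r["item_id"]].append(r["text"])
    return by_reviewer, by_item


def extract_persona(reviewer_reviews, model="gpt-4o-mini"):
    """Distill a reviewer's concrete preferences into a short persona description."""
    prompt = (
        "Summarize this reviewer's concrete preferences (genres, styles, dislikes) "
        "as a short persona description:\n\n" + "\n---\n".join(reviewer_reviews)
    )
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content


def simulate_user_turn(persona, dialogue_history, model="gpt-4o-mini"):
    """Persona-augmented user simulator: produce the next user utterance."""
    system = f"You are a user seeking a recommendation. Your persona:\n{persona}"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system}, *dialogue_history],
    )
    return resp.choices[0].message.content
```

A recommender simulator would be built analogously, conditioning its turns on the item knowledge aggregated per item rather than on a persona.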