This work presents a novel approach to generative recommendation that aims to better align Large Language Models (LLMs) with the needs of recommendation tasks. The key insights are:
Current generative recommendation methods struggle to effectively encode recommendation items within the text-to-text framework using concise yet meaningful ID representations. This limits the potential of LLM-based generative recommendation systems.
The authors propose IDGenRec, a framework that represents each item as a unique, concise, semantically rich, platform-agnostic textual ID using human language tokens. This is achieved by training a textual ID generator alongside the LLM-based recommender.
The textual IDs generated by the ID generator are then seamlessly integrated into the recommendation prompt, enabling the LLM-based recommender to generate personalized recommendations in natural language form.
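As a rough illustration of this integration step, the sketch below shows how generated textual IDs might be woven into a sequential-recommendation prompt. The template wording and the example IDs are hypothetical, not the paper's exact format.

```python
# Sketch: substituting generated textual item IDs into a recommendation
# prompt. The template string is illustrative; IDGenRec's actual prompt
# format may differ.

def build_prompt(user_history_ids, template=None):
    """Join each item's generated textual ID into a natural-language prompt."""
    if template is None:
        template = ("A user has interacted with the following items: {history}. "
                    "Predict the textual ID of the next item.")
    history = ", ".join(user_history_ids)
    return template.format(history=history)

# Textual IDs are concise human-language phrases generated from item metadata.
history = ["wireless noise cancelling headphones", "portable bluetooth speaker"]
print(build_prompt(history))
```

Because the IDs are ordinary language tokens, the resulting prompt is directly consumable by a text-to-text LLM, and the model's generated output can itself be matched back to an item's textual ID.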
The authors address several challenges in this approach, including generating concise yet unique IDs from item metadata and designing a training strategy to enable effective collaboration between the ID generator and the base recommender.
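To make the conciseness/uniqueness tension concrete, here is a minimal sketch of one way to enforce unique IDs: take increasing prefixes of the tokens produced from an item's metadata until the resulting ID is unseen. This is an illustrative heuristic, not the paper's actual ID-generation algorithm.

```python
# Sketch: keep a generated textual ID as short as possible while
# guaranteeing uniqueness across the catalog. Hypothetical helper,
# not IDGenRec's actual procedure.

def make_unique_id(candidate_tokens, existing_ids, min_len=3):
    """Take increasing prefixes of generated tokens until the ID is unique."""
    for k in range(min_len, len(candidate_tokens) + 1):
        cand = " ".join(candidate_tokens[:k])
        if cand not in existing_ids:
            existing_ids.add(cand)
            return cand
    # Fallback: append a numeric suffix when every prefix collides.
    base = " ".join(candidate_tokens)
    i = 2
    while f"{base} {i}" in existing_ids:
        i += 1
    uid = f"{base} {i}"
    existing_ids.add(uid)
    return uid

catalog_ids = set()
print(make_unique_id(["vintage", "leather", "camera", "bag"], catalog_ids))
print(make_unique_id(["vintage", "leather", "camera", "bag"], catalog_ids))
```

In the actual framework the candidate tokens would come from the trained ID generator rather than raw metadata keywords, and the generator is optimized jointly with the recommender rather than by a fixed rule like this.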
Experiments show that the proposed framework consistently outperforms existing generative recommendation models on standard sequential recommendation tasks. Additionally, the authors explore the possibility of training a foundational generative recommendation model that can perform well on unseen datasets in a zero-shot setting, demonstrating the potential of the IDGenRec paradigm.