Core Concept
Bridging the knowledge gap between large language models and recommendation tasks by fine-tuning the models with auxiliary tasks that encode item correlations and user preferences.
Summary
The content discusses a method to improve the performance of large language models (LLMs) in recommendation tasks by supplementing their fine-tuning with auxiliary tasks that mimic classical operations used in conventional recommender systems.
The key insights are:
- LLMs excel at natural language reasoning but lack the knowledge to model complex user-item interactions inherent in recommendation tasks.
- The authors propose generating auxiliary-task data samples that encode item correlations and user preferences through natural language prompts inspired by masked item modeling (MIM) and Bayesian personalized ranking (BPR); a sketch of this prompt construction follows the list.
- These auxiliary-task data samples are used along with more informative recommendation-task data samples (which represent user sequences using item IDs and titles) to fine-tune the LLM backbones.
- Experiments on retrieval, ranking, and rating prediction tasks across three Amazon datasets show that the proposed method significantly outperforms both conventional and LLM-based baselines, including the current state-of-the-art.
- Ablation studies demonstrate the effectiveness of the individual proposed auxiliary tasks and the robustness of the method across different LLM backbone sizes.
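
To make the auxiliary-task construction more concrete, below is a minimal Python sketch of how MIM- and BPR-style natural-language prompts might be built from a user's interaction sequence. The function names, prompt templates, and sample items are illustrative assumptions, not the authors' actual implementation.

```python
import random

def make_mim_prompt(item_titles: list[str]) -> tuple[str, str]:
    """MIM-style sample: mask one item in the sequence and ask the
    model to recover it from the surrounding context (assumed template)."""
    idx = random.randrange(len(item_titles))
    target = item_titles[idx]
    masked = item_titles.copy()
    masked[idx] = "[MASK]"
    prompt = (
        "A user interacted with the following items in order: "
        + ", ".join(masked)
        + ". Which item fits the [MASK] position?"
    )
    return prompt, target

def make_bpr_prompt(positive: str, negative: str) -> tuple[str, str]:
    """BPR-style sample: pair an observed (positive) item with a sampled
    (negative) item and ask which the user prefers; the observed item is
    the label. Order is shuffled to avoid positional bias."""
    first, second = (
        (positive, negative) if random.random() < 0.5 else (negative, positive)
    )
    prompt = (
        f'Given the user\'s history, which item would they prefer: '
        f'"{first}" or "{second}"?'
    )
    return prompt, positive

# Hypothetical interaction sequence in the spirit of the Toys & Games dataset.
history = ["LEGO Classic Bricks", "Hot Wheels Track Set", "UNO Card Game"]
print(make_mim_prompt(history))
print(make_bpr_prompt(positive="Jenga", negative="Connect Four"))
```

Samples generated this way would then be mixed with the recommendation-task samples (user sequences rendered with item IDs and titles) to form the fine-tuning corpus for the LLM backbone.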
Statistics
The content provides little direct numerical data, but it mentions the following key figures:
- The Amazon Toys & Games dataset has 35,598 users.
- Average sequence lengths are 8.63 ± 8.51 for Toys & Games, 8.88 ± 8.16 for Beauty, and 8.32 ± 6.07 for Sports & Outdoors.
Quotes
The content does not contain any direct quotes that support the key points.