The paper introduces ALRO, a novel framework designed to enhance the ranking capabilities of Large Language Models (LLMs) for recommendation systems. The key highlights are:
Soft Lambda Loss (SLL): The authors propose a differentiable ranking score by combining the soft-argmax function with the traditional Lambda loss. This helps align the objectives of language generation and ranking tasks.
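The soft-ranking idea behind SLL can be illustrated with a toy sketch. This is not the paper's exact formulation: it uses a temperature-scaled pairwise sigmoid as one common way to obtain differentiable ranks (the expected number of higher-scored competitors), which could then feed the rank-dependent weights of a Lambda-style loss. The function name `soft_ranks` and the temperature `tau` are illustrative assumptions.

```python
import numpy as np

def soft_ranks(scores, tau=1.0):
    """Differentiable (soft) rank of each item: 1 + the expected number
    of items scored higher, using a temperature-scaled sigmoid in place
    of a hard comparison. As tau -> 0 this recovers the integer ranks.
    NOTE: illustrative sketch, not the paper's exact soft-argmax form."""
    diff = scores[None, :] - scores[:, None]      # diff[i, j] = s_j - s_i
    probs = 1.0 / (1.0 + np.exp(-diff / tau))     # soft "j beats i" indicator
    np.fill_diagonal(probs, 0.0)                  # an item never beats itself
    return 1.0 + probs.sum(axis=1)

scores = np.array([2.0, 0.5, 1.0])
print(soft_ranks(scores, tau=0.1))  # close to the hard ranks [1, 3, 2]
```

Because every operation is smooth, gradients flow from a ranking metric through the soft ranks back to the model's scores, which is what lets the generation and ranking objectives be trained jointly.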
Permutation-Sensitive Learning (PSL): To address the position bias issue in LLM-based recommendation, the authors introduce a permutation-sensitive learning framework. This minimizes the output distribution distance between the original and permuted candidate lists during the fine-tuning stage, improving the model's permutation invariance.
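The PSL objective can be sketched in miniature as follows. This is a toy sketch under stated assumptions: a generic `score_fn` stands in for the LLM, and KL divergence is used as the distance between the output distributions for the original and permuted candidate lists (the paper's actual distance measure and training setup may differ).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def permutation_consistency_loss(score_fn, items, rng):
    """Score the candidates in their original order and in a random
    permutation, realign the permuted scores by item identity, and
    penalise the KL divergence between the two softmax distributions.
    Illustrative sketch: score_fn stands in for the LLM's scoring."""
    perm = rng.permutation(len(items))
    p = softmax(score_fn(items))
    q_shuffled = softmax(score_fn([items[i] for i in perm]))
    q = np.empty_like(q_shuffled)
    q[perm] = q_shuffled            # undo the shuffle: align by item
    return kl_divergence(p, q)

# A position-invariant scorer incurs ~zero loss; a position-biased one does not.
unbiased = lambda items: np.array(items, dtype=float)
biased = lambda items: np.array(items, dtype=float) + np.arange(len(items))
rng = np.random.default_rng(0)
print(permutation_consistency_loss(unbiased, [3, 1, 2], rng))  # close to 0
```

Driving this loss toward zero during fine-tuning pushes the model to produce the same per-item distribution regardless of where each candidate appears in the prompt, which is the permutation-invariance property the paper targets.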
Comprehensive Evaluation: The authors conduct extensive experiments on two real-world datasets, comparing ALRO against state-of-the-art baselines spanning both embedding-based and LLM-based recommendation models. The results demonstrate ALRO's superior performance on ranking tasks.
Ablation Study: The authors perform an ablation study to quantify the contributions of the individual components (SLL and PSL) within the ALRO framework.
Efficiency Analysis: The authors compare the performance and efficiency of ALRO against the bootstrapping method, showing that ALRO can achieve comparable outcomes while significantly reducing inference time.
Scalability: The authors investigate the adaptability of ALRO across different LLM parameter sizes, showcasing its consistent performance improvements over traditional supervised fine-tuning approaches.
Overall, the ALRO framework represents a significant advancement in leveraging LLMs for efficient and accurate list-wise recommendation, addressing key challenges such as objective alignment and position bias.
Key insights distilled from the source by Wenshuo Chao... at arxiv.org, 03-29-2024
https://arxiv.org/pdf/2403.19181.pdf