Core Concepts
The authors propose a Step-by-step knowLedge dIstillation fraMework for recommendation (SLIM) to leverage the reasoning capabilities of Large Language Models (LLMs) in a resource-efficient manner.
Abstract
The paper introduces SLIM, a novel framework that distills knowledge from LLMs for sequential recommendation. It addresses two key challenges: the complexity of user behavior patterns and the prohibitive resource requirements of LLMs. SLIM uses chain-of-thought (CoT) prompting to guide a teacher LLM in generating rationales that capture step-by-step recommendation reasoning. A smaller student model is then fine-tuned on these rationales, which improves performance across different recommendation backbones and reduces popularity bias. SLIM also demonstrates interpretability, robustness across user sparsity levels, and cost efficiency compared to existing LLM-based methods.
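The distillation pipeline described above can be sketched in miniature: a CoT prompt is built from a user's interaction history, a teacher LLM returns a step-by-step rationale, and the (prompt, rationale) pair becomes a fine-tuning example for the student. This is a minimal illustrative sketch, not the paper's actual prompts or code; `build_cot_prompt`, `fake_teacher`, and `make_distillation_pair` are hypothetical names, and `fake_teacher` stands in for a real ChatGPT API call.

```python
def build_cot_prompt(history, candidates):
    """Assemble a step-by-step (CoT) prompt asking the teacher LLM to
    reason about a user's preferences before recommending an item."""
    lines = [
        "The user has interacted with the following items in order:",
        ", ".join(history),
        "Candidate items: " + ", ".join(candidates),
        "Please reason step by step:",
        "1. Summarize the user's preferences from the history.",
        "2. Infer which candidates match those preferences.",
        "3. Recommend the best candidate and explain why.",
    ]
    return "\n".join(lines)


def fake_teacher(prompt):
    # Stand-in for a call to the teacher LLM (e.g. ChatGPT); a real
    # pipeline would send `prompt` to the API and collect the rationale.
    return "Step 1: The user favors sci-fi. Step 2: ... Step 3: Recommend Dune."


def make_distillation_pair(history, candidates):
    """Produce one (input, target) fine-tuning example for the student."""
    prompt = build_cot_prompt(history, candidates)
    rationale = fake_teacher(prompt)
    # The student model is fine-tuned to map prompt -> rationale,
    # inheriting the teacher's reasoning without its parameter count.
    return {"input": prompt, "target": rationale}


pair = make_distillation_pair(["Inception", "Interstellar"], ["Dune", "Tenet"])
```

In the full framework, many such pairs are generated offline, so the expensive teacher is queried once per user rather than at every inference step.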
Stats
User behavior patterns are complex.
LLMs have prohibitively high resource requirements.
SLIM's student model has 7B parameters, compared to the 175B of LLMs like ChatGPT.
Costs for API calls with ChatGPT are $0.0015/1K tokens for input and $0.002/1K tokens for output.
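Given the per-token prices above, the API cost of generating one rationale with the teacher can be estimated. The helper below is a simple sketch using the quoted rates; the token counts in the example are illustrative, not figures from the paper.

```python
def chatgpt_cost(input_tokens, output_tokens,
                 input_rate=0.0015, output_rate=0.002):
    """Estimate ChatGPT API cost in USD, given per-1K-token rates
    ($0.0015/1K input, $0.002/1K output)."""
    return (input_tokens / 1000) * input_rate + (output_tokens / 1000) * output_rate


# e.g. a 500-token CoT prompt and a 200-token rationale
cost = chatgpt_cost(500, 200)  # -> 0.00115 USD
```

Because distillation queries the teacher once per training example rather than at serving time, this cost is paid offline; the 7B student then handles inference.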
Quotes
"Large language models open up new horizons for sequential recommendations."
"SLIM paves a promising path for sequential recommenders to enjoy the exceptional reasoning capabilities of LLMs in a 'slim' manner."