Core Concepts
Incorporating ID embeddings into LLMs improves recommendation system performance.
Summary
This article introduces RA-Rec, an efficient ID representation alignment framework for LLM-based recommendation systems. It proposes a new paradigm, ID representation alignment, to address limitations in existing approaches. The framework comprises hybrid prompt construction, a representation alignment module, and efficient tuning. Extensive experiments demonstrate that RA-Rec outperforms state-of-the-art methods.
Abstract:
- Large language models (LLMs) are powerful tools for natural language processing tasks.
- Current approaches in LLM-based recommendation systems have limitations.
- RA-Rec proposes a new paradigm, ID representation alignment, to improve recommendation knowledge and uniqueness.
Introduction:
- Recommendation systems reduce information overload and provide relevant content.
- Integrating LLMs into recommendation systems (LLM-based RS) is especially effective in cold-start and cross-domain transfer settings.
Methodology:
- Hybrid prompt construction combines soft prompts derived from pre-trained ID representations with hard (textual) prompts.
- A representation alignment module bridges the gap between ID representations and the LLM's semantic space (see the sketch after this list).
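The following is a minimal sketch of the general idea, not the paper's exact architecture: a small, hypothetical `IDAligner` module projects a pre-trained ID embedding into the LLM's token-embedding space as a few soft-prompt vectors, which are then prepended to the embedded hard (text) prompt. All names and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IDAligner(nn.Module):
    """Hypothetical alignment module: maps a pre-trained ID embedding
    (e.g. from a sequential recommender) into the LLM token-embedding space
    as a short sequence of soft-prompt vectors."""
    def __init__(self, id_dim: int, llm_dim: int, num_soft_tokens: int = 4):
        super().__init__()
        self.num_soft_tokens = num_soft_tokens
        self.proj = nn.Sequential(
            nn.Linear(id_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim * num_soft_tokens),
        )

    def forward(self, id_emb: torch.Tensor) -> torch.Tensor:
        # id_emb: (batch, id_dim) -> soft prompt: (batch, num_soft_tokens, llm_dim)
        out = self.proj(id_emb)
        return out.view(id_emb.size(0), self.num_soft_tokens, -1)


def build_hybrid_prompt(hard_token_embs: torch.Tensor,
                        soft_prompt: torch.Tensor) -> torch.Tensor:
    """Prepend aligned ID soft prompts to the embedded hard (text) prompt.
    hard_token_embs: (batch, seq_len, llm_dim); soft_prompt: (batch, k, llm_dim)."""
    return torch.cat([soft_prompt, hard_token_embs], dim=1)


# Toy usage with made-up dimensions.
aligner = IDAligner(id_dim=64, llm_dim=768)
id_emb = torch.randn(2, 64)      # pre-trained ID embeddings for 2 users/items
hard = torch.randn(2, 32, 768)   # embedded text-prompt tokens
hybrid = build_hybrid_prompt(hard, aligner(id_emb))
print(hybrid.shape)              # torch.Size([2, 36, 768])
```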
Experimental Setup:
- Evaluation metrics include HitRate@K, NDCG@10, and MRR@10 on the Amazon Books and Clothing datasets (a simple implementation of these metrics is sketched below).
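As a reference for the metrics used, here is a minimal, binary-relevance implementation of HitRate@K, NDCG@K, and MRR@K over a ranked candidate list; the paper's exact evaluation protocol (candidate sampling, cutoffs) may differ.

```python
import math

def hit_rate_at_k(ranked_items, target, k):
    """1 if the target item appears in the top-k of the ranked list, else 0."""
    return int(target in ranked_items[:k])

def ndcg_at_k(ranked_items, target, k):
    """Binary-relevance NDCG: 1/log2(rank+1) if the target is in the top-k."""
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0

def mrr_at_k(ranked_items, target, k):
    """Reciprocal rank of the target within the top-k, else 0."""
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == target:
            return 1.0 / rank
    return 0.0

# Example: the target item is ranked 3rd in the candidate list.
ranked = ["b012", "b077", "b003", "b045"]
print(hit_rate_at_k(ranked, "b003", 10))         # 1
print(round(ndcg_at_k(ranked, "b003", 10), 3))   # 0.5
print(round(mrr_at_k(ranked, "b003", 10), 3))    # 0.333
```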
Results:
- RA-Rec outperforms baseline models across various evaluation metrics.
Compatibility Evaluation:
- RA-Rec shows compatibility with different transformer-based architectures and ID-based models.
Effectiveness of Alignment Module:
- RA-Rec demonstrates superior performance compared to other alignment approaches.
Data Efficiency Study:
- Efficient data construction method improves data quality and leads to better alignment modeling.
Training Efficiency Comparison:
- RA-Rec achieves high performance with far lower computational cost than full fine-tuning; a generic parameter-efficient setup is sketched below.
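To illustrate why this kind of approach is cheap to train, the sketch below shows a generic parameter-efficient setup, assuming the LLM backbone is frozen and only the small alignment module is updated; the paper's actual tuning scheme may differ.

```python
import torch

def trainable_parameters(llm: torch.nn.Module, aligner: torch.nn.Module):
    """Freeze the LLM backbone and expose only the alignment module's
    parameters to the optimizer (a generic parameter-efficient setup)."""
    for p in llm.parameters():
        p.requires_grad = False
    return [p for p in aligner.parameters() if p.requires_grad]

# Usage (llm could be any causal LM; aligner as sketched earlier):
# optimizer = torch.optim.AdamW(trainable_parameters(llm, aligner), lr=1e-4)
```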
Statistics
RA-Rec significantly outperforms current state-of-the-art methods, achieving up to 3.0% absolute HitRate@100 improvement while using less than one-tenth of the training data.
Quotes
"Integrating LLMs into RS as LLM-based RS is a valuable direction to explore."
"RA-rec demonstrates superior performance compared to other alignment approaches."