Negative sampling is a crucial but often overlooked component of recommendation systems: well-chosen negative examples expose items a user genuinely dislikes and thereby improve model performance.
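As a baseline for comparison, the simplest strategy is uniform random negative sampling: for each user, draw items the user has never interacted with. The sketch below is a minimal illustration with hypothetical user/item IDs, not any specific paper's method.

```python
import random

def sample_negatives(user_pos_items, all_items, num_neg=4, rng=None):
    """Uniformly sample, for each user, items the user has NOT interacted with.

    user_pos_items: dict mapping user id -> set of positively interacted item ids.
    Returns a dict mapping user id -> list of sampled negative item ids.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    negatives = {}
    for user, pos in user_pos_items.items():
        candidates = [i for i in all_items if i not in pos]
        negatives[user] = rng.sample(candidates, min(num_neg, len(candidates)))
    return negatives

# Toy interaction data (hypothetical ids)
interactions = {"u1": {"i1", "i3"}, "u2": {"i2"}}
items = ["i1", "i2", "i3", "i4", "i5"]
negs = sample_negatives(interactions, items)
```

More sophisticated samplers (popularity-weighted, hardness-aware) replace the uniform `rng.sample` call with a scored draw, which is where the quality gains discussed above come from.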
Self-supervised learning techniques can effectively address the data sparsity challenge in recommendation systems by leveraging unlabeled data to extract meaningful representations and make accurate predictions.
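A common way such methods exploit unlabeled data is to build multiple "views" of a user's interaction sequence via random augmentation (masking, cropping) and train the model to keep their representations close. The sketch below shows only the augmentation step, with hypothetical item IDs; it is a generic illustration, not any one paper's pipeline.

```python
import random

def item_mask(seq, mask_ratio=0.3, mask_token="[MASK]", rng=None):
    """Replace a random fraction of items with a mask token (one view)."""
    rng = rng or random.Random(42)
    out = list(seq)
    n_mask = max(1, int(len(seq) * mask_ratio))
    for idx in rng.sample(range(len(seq)), n_mask):
        out[idx] = mask_token
    return out

def crop(seq, keep_ratio=0.6, rng=None):
    """Keep a random contiguous sub-sequence (another view)."""
    rng = rng or random.Random(42)
    n_keep = max(1, int(len(seq) * keep_ratio))
    start = rng.randrange(len(seq) - n_keep + 1)
    return seq[start:start + n_keep]

history = ["i7", "i2", "i9", "i4", "i1"]  # one user's interaction sequence
view_a, view_b = item_mask(history), crop(history)
```

A contrastive loss (e.g. InfoNCE) over the encoded `view_a` and `view_b` then supplies a training signal without any labels, which is what makes the approach attractive under data sparsity.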
The knowledge gap between large language models and recommendation tasks can be bridged by fine-tuning the models on auxiliary tasks that encode item correlations and user preferences.
KGUF effectively selects and integrates user-relevant semantic features from knowledge graphs during the graph learning phase to improve item representation and recommendation performance.
A novel framework integrates soft lambda loss and permutation-sensitive learning to align the objectives of language generation and ranking, enabling large language models to perform efficient and accurate list-wise recommendation.
Generating concise yet semantically rich textual IDs for recommendation items enables seamless integration of personalized recommendations into natural language generation.
Collaborative retrieval-augmented LLMs improve long-tail recommendation by aligning reasoning with user-item interactions.
The Desmoothing Framework (DGR) mitigates over-smoothing in GCN-based recommendation models from both global and local perspectives.
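Over-smoothing means that as GCN layers stack, node embeddings drift toward each other and lose discriminative power. The sketch below demonstrates the effect on a toy user-item graph with mean-neighbor propagation (self-loops included for stability); the graph, IDs, and embeddings are all hypothetical, and this is a diagnostic illustration rather than DGR itself.

```python
import math

def propagate(emb, adj):
    """One simplified GCN layer: replace each node's vector with the
    mean of its neighbours' vectors (self-loops included in adj)."""
    return {node: [sum(emb[n][d] for n in nbrs) / len(nbrs)
                   for d in range(len(emb[node]))]
            for node, nbrs in adj.items()}

def mean_pairwise_cosine(emb):
    """Average cosine similarity over all node pairs; a common
    over-smoothing proxy (closer to 1 = more smoothed)."""
    def cos(a, b):
        num = sum(x * y for x, y in zip(a, b))
        den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return num / den
    nodes = list(emb)
    pairs = [(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]]
    return sum(cos(emb[a], emb[b]) for a, b in pairs) / len(pairs)

# Hypothetical bipartite user-item graph with self-loops
adj = {
    "u1": ["u1", "i1", "i2"],
    "u2": ["u2", "i2", "i3"],
    "i1": ["i1", "u1"],
    "i2": ["i2", "u1", "u2"],
    "i3": ["i3", "u2"],
}
emb = {"u1": [1.0, 0.0], "u2": [0.0, 1.0],
       "i1": [0.8, 0.1], "i2": [0.5, 0.5], "i3": [0.1, 0.8]}

sim_init = mean_pairwise_cosine(emb)
for _ in range(8):
    emb = propagate(emb, adj)
sim_deep = mean_pairwise_cosine(emb)  # similarity rises as layers stack
```

A desmoothing method's goal, in these terms, is to keep `sim_deep` from collapsing toward 1 while retaining the benefits of deep propagation.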
Users' interaction sequences contain noise that hurts recommendation accuracy; SSDRec proposes a three-stage framework that augments sequences and then denoises them effectively.
InteraRec introduces a novel recommendation framework using screenshots and large language models to provide personalized and effective recommendations to users.