Selection bias introduced during the preference elicitation stage can degrade the performance of subsequent item recommendations, but existing debiasing methods can help mitigate this effect.
The core message of this paper is to introduce Behavior-Contextualized Item Preference Modeling (BCIPM), a novel approach to multi-behavior recommendation. The proposed Behavior-Contextualized Item Preference Network (BIPN) learns users' item-specific preferences within each behavior and uses only those preferences relevant to the target behavior for the final recommendation, significantly reducing noise from auxiliary behaviors.
A novel diffusion model-based collaborative filtering method, CF-Diff, that effectively leverages high-order connectivity information to enhance recommendation accuracy.
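As background for the diffusion-based approach, a minimal sketch of how a standard diffusion forward process can corrupt a user's interaction vector (a generic illustration only; the noise schedule, denoiser, and how CF-Diff conditions on high-order connectivity are not specified by the summary and are assumptions here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear noise schedule (an assumption; the paper's schedule may differ).
T = 10
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

def q_sample(x0, t):
    """Forward diffusion step: mix the clean interaction vector with
    Gaussian noise according to the schedule at step t."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

# Toy interaction vector over 5 items (1 = user interacted with the item).
x0 = np.array([1.0, 0.0, 1.0, 0.0, 1.0])
x_t = q_sample(x0, t=5)
```

A denoiser trained to recover `x0` from `x_t` can then score unobserved items; per the summary, CF-Diff additionally injects high-order neighbor information into this reconstruction.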
Incorporating a recklessness regularization term into the learning process of probability-based recommender systems makes it possible to control the risk level of predictions, improving both the number and the quality of recommendations.
The core message of this paper is to introduce DT4IER, a novel Decision Transformer-based framework that effectively balances the optimization of immediate user engagement and long-term user retention in sequential recommendation scenarios.
Large Language Models (LLMs) can be leveraged to integrate multiple recommendation tasks, including recall, ranking, and re-ranking, within a unified end-to-end framework, eliminating the need for task-specific models and enabling efficient handling of large-scale item sets.
The core message of this paper is that the distortion of user similarity relationships across domains is a key cause of negative transfer in cross-domain recommendation. The proposed Collaborative information regularized User Transformation (CUT) framework alleviates this issue by directly filtering out irrelevant source-domain collaborative information.
A novel positive-dominated negative synthesizing (PDNS) strategy that mitigates the over-fitting issue caused by false negatives in hard negative sampling for recommender systems.
The proposed Hard-BPR loss function mitigates the influence of false negatives in hard negative sampling, improving the robustness and effectiveness of recommendation model training.
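For context on the two entries above, a sketch of standard BPR loss combined with naive hard negative sampling, which illustrates the false-negative problem both papers target (this is the generic baseline setup, not PDNS or Hard-BPR themselves; the embeddings and scores are toy values):

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores):
    """Standard BPR pairwise loss: -log sigmoid(s_pos - s_neg),
    written via log1p(exp(-diff)) for numerical stability."""
    diff = pos_scores - neg_scores
    return float(np.mean(np.log1p(np.exp(-diff))))

def hard_negative(user_emb, candidate_embs):
    """Naive hard negative sampling: pick the highest-scoring candidate.
    False negatives (unobserved items the user would actually like) tend
    to score high and get selected, causing the over-fitting both
    papers above aim to mitigate."""
    scores = candidate_embs @ user_emb
    return int(np.argmax(scores))

user = np.array([0.5, 1.0])
cands = np.array([[0.1, 0.2], [0.9, 0.8], [0.3, -0.1]])
idx = hard_negative(user, cands)
loss = bpr_loss(np.array([2.0]), np.array([cands[idx] @ user]))
```

PDNS replaces the raw hard negative with a synthesized one, and Hard-BPR modifies the loss itself; both reduce the gradient contribution of likely false negatives relative to this baseline.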
Critics' wine ratings are more consistent and predictive of amateur tastes than amateur ratings, but combining ratings from both groups can further improve recommendation performance.