The paper identifies two root causes of mainstream bias in recommender systems: the discrepancy modeling problem and the unsynchronized learning problem. To address these issues, the authors propose the End-to-End Adaptive Local Learning (TALL) framework.
To tackle the discrepancy modeling problem, TALL integrates a loss-driven Mixture-of-Experts module that adaptively provides customized models for different users through an end-to-end learning procedure. The adaptive loss-driven gate module assigns high gate values to expert models that are effective for the target user and low values to less effective expert models.
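The loss-driven gating idea can be sketched as follows. This is an illustrative assumption, not the paper's actual implementation: the paper learns the gate end-to-end, whereas the sketch below simply converts per-expert losses for a user into gate values with a softmax over negative losses, so experts that fit the user well (low loss) receive high gate values. The function names and the temperature parameter are hypothetical.

```python
import numpy as np

def loss_driven_gate(expert_losses, temperature=1.0):
    """Turn per-expert losses for one user into gate values.
    Lower loss (a better-fitting expert) yields a higher gate value.
    Illustrative stand-in for TALL's learned, loss-driven gate."""
    scores = -np.asarray(expert_losses, dtype=float) / temperature
    scores -= scores.max()               # subtract max for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()       # gate values sum to 1

def gated_prediction(expert_preds, expert_losses):
    """Combine expert predictions using the loss-driven gate values."""
    gate = loss_driven_gate(expert_losses)
    return float(np.dot(gate, np.asarray(expert_preds, dtype=float)))
```

A lower temperature sharpens the gate toward the single best expert; a higher temperature blends experts more evenly.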
To address the unsynchronized learning problem, TALL includes an adaptive weight module that dynamically adjusts per-user weights in the objective function to synchronize the learning paces of different users. The adaptive weight module uses a loss change mechanism and a gap mechanism to make the weight computation more robust and stable.
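One way to picture the two mechanisms is sketched below. The exact formulas are assumptions for illustration only: here the "loss change" is how much a user's loss dropped in the last step (slow learners show small drops), the "gap" is the distance from the best-learned user's loss, and users who are learning slowly while still far behind receive larger objective weights.

```python
import numpy as np

def adaptive_weights(prev_losses, curr_losses, eps=1e-8):
    """Illustrative per-user weights combining a loss-change term and a
    gap term. Users with small recent loss improvement (slow pace) and a
    large gap to the fastest learner are upweighted, so the objective
    emphasizes users lagging behind. The formula is a hypothetical
    stand-in for TALL's adaptive weight module."""
    prev = np.asarray(prev_losses, dtype=float)
    curr = np.asarray(curr_losses, dtype=float)
    change = np.maximum(prev - curr, 0.0)   # loss change: recent improvement per user
    gap = curr - curr.min()                 # gap: distance to best-learned user
    raw = gap / (change + eps)              # slow and far behind => large weight
    return raw / raw.sum()                  # normalize to sum to 1
```

Dividing the gap by the loss change means a user is upweighted only while both behind and improving slowly, which is one plausible way to keep the weights stable as learning paces converge.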
Extensive experiments on three datasets demonstrate that TALL significantly outperforms state-of-the-art debiasing methods, enhancing utility for niche users by 6.1% over the best baseline with equal model complexity. The ablation study validates the effectiveness of the key components in the TALL framework.