TTT4Rec can rapidly adapt to dynamic user behavior by continually updating model parameters through self-supervised learning at test time.
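The test-time adaptation idea can be illustrated with a minimal sketch. This is not TTT4Rec's actual architecture (which the summary does not detail); it is a hypothetical toy next-item scorer, with made-up embeddings and a mean-pooled linear layer, whose parameters are refined on the test user's own sequence via a self-supervised next-item loss before the final prediction.

```python
import math
import random

random.seed(0)
n_items, dim = 6, 3

# Hypothetical setup: random item embeddings E (frozen) and an adaptable
# linear map W; neither comes from the paper.
E = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_items)]
W = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in range(dim)]

def mean_embed(prefix):
    # Mean-pool the embeddings of the items seen so far.
    return [sum(E[i][k] for i in prefix) / len(prefix) for k in range(dim)]

def scores(W, prefix):
    # Score every candidate item: s_i = E_i . (x @ W).
    x = mean_embed(prefix)
    h = [sum(x[k] * W[k][j] for k in range(dim)) for j in range(dim)]
    return [sum(E[i][j] * h[j] for j in range(dim)) for i in range(n_items)]

def softmax(s):
    m = max(s)
    e = [math.exp(v - m) for v in s]
    z = sum(e)
    return [v / z for v in e]

def ssl_step(W, seq, lr=0.1):
    """One test-time self-supervised update: predict each observed item
    from its prefix, accumulate the cross-entropy gradient, and step W."""
    grad = [[0.0] * dim for _ in range(dim)]
    loss = 0.0
    for t in range(1, len(seq)):
        prefix, tgt = seq[:t], seq[t]
        p = softmax(scores(W, prefix))
        loss += -math.log(p[tgt])
        x = mean_embed(prefix)
        # dL/dh_j = sum_i (p_i - 1[i == tgt]) * E[i][j]
        dh = [sum((p[i] - (i == tgt)) * E[i][j] for i in range(n_items))
              for j in range(dim)]
        for k in range(dim):
            for j in range(dim):
                grad[k][j] += x[k] * dh[j]
    n = len(seq) - 1
    for k in range(dim):
        for j in range(dim):
            W[k][j] -= lr * grad[k][j] / n
    return loss / n

# Test-time training: a few gradient steps on the user's own history,
# then predict the next item with the adapted parameters.
user_seq = [1, 3, 5, 3]
losses = [ssl_step(W, user_seq) for _ in range(30)]
pred = max(range(n_items), key=lambda i: scores(W, user_seq)[i])
```

Because the self-supervised loss is computed from the observed sequence alone, no labels beyond the user's own history are needed at test time; the loss drops over the adaptation steps, which is the signal the model is exploiting to track dynamic behavior.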
Non-item pages carry information that is informative for the next item choice, improving next-item prediction performance.
This paper proposes a novel learning paradigm, Online Self-Supervised Self-distillation for Sequential Recommendation (S4Rec), which bridges self-supervised learning and self-distillation methods to address the sparsity of user behavior data in sequential recommendation.
This paper proposes a novel pre-trained sequential recommendation framework, PrepRec, that can achieve zero-shot cross-domain and cross-application transfer without any auxiliary information.
LLaRA proposes a novel framework that integrates the behavioral patterns learned by traditional sequential recommender models with the world knowledge and reasoning capabilities of Large Language Models (LLMs) to enhance sequential recommendation performance.
Statistics-driven pre-training tasks reduce the impact of random noise in user action sequences and stabilize the optimization of sequential recommendation models.