The paper argues that the prevailing task formulation in recommender systems research oversimplifies the problem: it reduces recommendation to predicting missing values in a static user-item interaction matrix, rather than capturing the dynamic and contextual nature of the user's decision-making process.
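For concreteness, the matrix-completion formulation being critiqued can be sketched as plain matrix factorization: learn latent user and item factors so their product approximates observed interactions, then treat the filled-in missing entries as static recommendation scores. The toy matrix, hyperparameters, and gradient-descent loop below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy interaction matrix: rows = users, cols = items, 0 = unobserved.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)
mask = R > 0

k, lr, reg = 2, 0.01, 0.02  # latent rank, step size, L2 penalty (all illustrative)
U = rng.normal(scale=0.1, size=(R.shape[0], k))
V = rng.normal(scale=0.1, size=(R.shape[1], k))

for _ in range(2000):  # full-batch gradient descent over observed entries only
    err = (R - U @ V.T) * mask
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

pred = U @ V.T  # unobserved entries are now "predicted" as static, context-free scores
```

Note that nothing in this formulation knows *when* or *why* an interaction happened, which is precisely the gap the paper identifies.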
The key insights are:
Recommender systems should be viewed as dynamic processes involving the user, the model, and the items, rather than as a static prediction task. The user's decision-making is shaped by contextual factors that academic research often overlooks.
Recommender tasks are inherently application-specific, as the factors influencing user decision-making vary across different scenarios. Defining research tasks based on specific application scenarios using domain-specific datasets may lead to more insightful findings.
The mismatch between the inputs accessible to a model and the information available to users during their decision-making process is a key issue. Current datasets and evaluation protocols often fail to capture the necessary contextual information, leading to a disconnect between academic research and practical applications.
Recommender systems should be conceptualized as a ranking problem that considers both the user's general preferences and their current decision-making context. A balanced approach is needed to effectively model the dynamic nature of user interactions.
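The ranking formulation described in the last point can be sketched as scoring items against a blend of a long-term preference vector and a current-session context vector. The linear blend, the `alpha` trade-off parameter, and the shared embedding space are illustrative assumptions, not a formulation given in the paper:

```python
import numpy as np

def rank_items(user_pref, session_context, item_embs, alpha=0.5):
    """Rank items by a blend of general preference and current context.

    alpha (illustrative) trades off long-term taste against the immediate
    decision context; both vectors live in the same embedding space.
    """
    query = alpha * user_pref + (1 - alpha) * session_context
    scores = item_embs @ query
    return np.argsort(-scores)  # item indices, best first

# Toy example: 4 items in a 3-d embedding space.
items = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.7, 0.7, 0.0],
])
long_term = np.array([1.0, 0.2, 0.0])  # user generally favors "axis 0"
right_now = np.array([0.0, 1.0, 0.0])  # current session points elsewhere

ranking = rank_items(long_term, right_now, items, alpha=0.5)
```

With an even blend, the hybrid item (index 3) outranks items that match only one of the two signals, which is the balance between general preference and current context that the paper calls for.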
The paper concludes by emphasizing the need for more scenario-specific task formulations, compatible baselines, and evaluation settings that better simulate practical conditions, which will require the availability of high-quality datasets from real-world platforms.
Source: arxiv.org