Core Concepts
A unified framework that adaptively enhances user representations and learns disentangled user preferences to improve cross-domain recommendation performance.
Abstract
The paper proposes a Unified Framework for Adaptive Representation Enhancement and Inversed Learning in Cross-Domain Recommendation (AREIL). The key highlights are:
Disentanglement-based Embedding Layer: The user representations are divided into domain-shared and domain-specific components to capture diverse user preferences across domains.
Adaptive Representation Enhancement Module (AREM):
Intra-domain AREM utilizes LightGCN to capture high-order collaborative information within each domain.
Inter-domain AREM employs self-attention to model cross-domain relevance and adaptively transfer informative, domain-general preference factors across domains.
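The two AREM components above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the graph sizes, embedding dimension, number of propagation layers, and the exact self-attention form are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy setup (sizes are illustrative, not from the paper) ---
n_users, n_items, d = 4, 6, 8
n_nodes = n_users + n_items

# random bipartite user-item interactions -> symmetric adjacency A
R = (rng.random((n_users, n_items)) < 0.5).astype(float)
A = np.zeros((n_nodes, n_nodes))
A[:n_users, n_users:] = R
A[n_users:, :n_users] = R.T

# symmetrically normalized adjacency, as in LightGCN
deg = A.sum(axis=1)
d_inv_sqrt = np.zeros_like(deg)
nz = deg > 0
d_inv_sqrt[nz] = deg[nz] ** -0.5
A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Intra-domain AREM: LightGCN propagation (no feature transform, no
# nonlinearity); the final embedding averages over layers 0..L
E0 = rng.normal(size=(n_nodes, d))
layers, E = [E0], E0
for _ in range(3):
    E = A_hat @ E
    layers.append(E)
E_final = np.mean(layers, axis=0)
user_emb_A = E_final[:n_users]  # enhanced user embeddings, domain A

# Inter-domain AREM: scaled dot-product self-attention over the two domains'
# user embeddings for the same (overlapping) users, so transferable factors
# are weighted adaptively rather than copied wholesale
user_emb_B = rng.normal(size=(n_users, d))  # stand-in: same users, domain B

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

X = np.stack([user_emb_A, user_emb_B], axis=1)   # (n_users, 2, d)
scores = X @ X.transpose(0, 2, 1) / np.sqrt(d)   # (n_users, 2, 2)
attn = softmax(scores, axis=-1)
fused = attn @ X                                 # cross-domain enhanced views
```

A real implementation would use learned query/key/value projections and train the whole pipeline end to end; the sketch only shows the data flow.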
Inversed Representation Learning Module (IRLM):
Domain classifiers and gradient reversal layers are introduced to learn disentangled user representations in a unified framework.
The inversed constraint objective ensures domain-shared and domain-specific representations encode complementary information.
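The gradient reversal mechanism behind IRLM can be illustrated with a hand-computed NumPy example. This is a sketch under simplifying assumptions (a linear domain classifier, binary cross-entropy, toy data); the variable names are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy domain-shared user representations and their domain labels
# (0 = one domain, 1 = the other)
H = rng.normal(size=(8, 4))
y = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=float)

# linear domain classifier on top of the shared representation
w = rng.normal(size=4)
p = sigmoid(H @ w)

# binary cross-entropy gradients
grad_w = H.T @ (p - y) / len(y)       # classifier learns to TELL domains apart
grad_H = np.outer(p - y, w) / len(y)  # gradient flowing back to the encoder

# gradient reversal layer: identity in the forward pass, sign flip (scaled by
# lambda) in the backward pass, so the encoder is instead pushed to make the
# shared representation domain-indistinguishable
lam = 1.0
grad_H_to_encoder = -lam * grad_H
```

In an autograd framework this is typically a custom function whose backward pass negates the incoming gradient; the adversarial effect, classifier discriminating while the encoder confuses it, is exactly the sign flip shown above.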
Multi-task Learning: The entire framework is optimized through a joint loss function that combines recommendation performance and disentanglement constraints.
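The joint objective can be sketched as below. Assumptions are labeled in the comments: a BPR-style ranking loss stands in for the per-domain recommendation losses, a constant stands in for the disentanglement term, and the trade-off weight is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def bpr_loss(pos_scores, neg_scores):
    # Bayesian Personalized Ranking: observed items should outscore
    # sampled negatives (an assumed stand-in for the paper's rec. loss)
    return float(-np.mean(np.log(
        1.0 / (1.0 + np.exp(-(pos_scores - neg_scores))))))

# toy per-domain scores for positive vs. sampled negative items
pos_a, neg_a = rng.normal(1.0, 1.0, 16), rng.normal(0.0, 1.0, 16)
pos_b, neg_b = rng.normal(1.0, 1.0, 16), rng.normal(0.0, 1.0, 16)

loss_rec = bpr_loss(pos_a, neg_a) + bpr_loss(pos_b, neg_b)

# placeholder for the disentanglement term (e.g. the domain-classification
# loss routed through the gradient reversal layer)
loss_disent = 0.7

lam = 0.5  # hypothetical trade-off hyperparameter
loss_total = loss_rec + lam * loss_disent
```

Optimizing one combined scalar lets the recommendation objectives of both domains and the disentanglement constraints shape the shared parameters simultaneously.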
Extensive experiments on multiple real-world datasets show that AREIL substantially outperforms state-of-the-art baselines. Ablation studies and representation visualizations further validate the effectiveness of adaptive enhancement and inversed learning for cross-domain recommendation.
Stats
The paper evaluates on three real-world cross-domain dataset pairs from Amazon:
Elec&Phone: 3,325 users, 17,709 items in Elec, 38,706 items in Phone
Sport&Phone: 4,998 users, 20,845 items in Sport, 13,655 items in Phone
Elec&Cloth: 15,761 users, 51,447 items in Elec, 48,781 items in Cloth