Core Concepts
The core contribution of this paper is a novel Adaptive Fair Representation Learning (AFRL) model that achieves personalized fairness in recommendation by treating fairness requirements as inputs and learning attribute-specific embeddings together with a debiased collaborative embedding, without compromising recommendation accuracy.
Summary
The paper proposes a novel Adaptive Fair Representation Learning (AFRL) model for personalized fairness in recommendations. The key highlights are:
- AFRL treats fairness requirements as inputs rather than hyperparameters, allowing it to adaptively generate fair embeddings for different users during the inference phase (see the illustrative sketch after this list). This avoids the unacceptable training cost incurred by the explosion of attribute combinations in existing methods.
- AFRL introduces an Information Alignment Module (IAlignM) that learns attribute-specific embeddings and a debiased collaborative embedding. This allows AFRL to exactly preserve the discriminative information of non-sensitive attributes and incorporate unbiased collaborative signals, achieving a better trade-off between fairness and accuracy than existing approaches that simply remove sensitive attribute information.
- Extensive experiments and theoretical analysis demonstrate the superiority of AFRL over state-of-the-art personalized fairness models in terms of both fairness and accuracy.
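The sketch below is a minimal illustration (not the authors' code) of the idea of treating fairness requirements as inputs at inference time: a per-user binary mask selects which attribute-specific embeddings to keep, and the kept embeddings are fused with a debiased collaborative embedding to form the fair user representation. All names and dimensions here (e.g., AFRLSketch, num_attrs, the linear fusion layer) are assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class AFRLSketch(nn.Module):
    """Illustrative sketch: compose a fair user embedding from a debiased
    collaborative embedding plus the attribute-specific embeddings of the
    attributes the user does NOT mark as sensitive."""

    def __init__(self, num_users: int, num_attrs: int, dim: int = 64):
        super().__init__()
        # Debiased collaborative embedding (one vector per user).
        self.collab = nn.Embedding(num_users, dim)
        # One attribute-specific embedding table per user attribute.
        self.attr_embeds = nn.ModuleList(
            nn.Embedding(num_users, dim) for _ in range(num_attrs)
        )
        # Simple fusion of the concatenated kept embeddings (assumed design choice).
        self.fuse = nn.Linear(dim * (num_attrs + 1), dim)

    def forward(self, user_ids: torch.Tensor, keep_mask: torch.Tensor) -> torch.Tensor:
        # user_ids:  (batch,) user indices.
        # keep_mask: (batch, num_attrs) fairness requirement; 1 keeps an
        #            attribute's information, 0 drops it (attribute is sensitive).
        parts = [self.collab(user_ids)]  # always keep the collaborative signal
        for a, table in enumerate(self.attr_embeds):
            # Zero out the embeddings of attributes the user marks as sensitive.
            parts.append(table(user_ids) * keep_mask[:, a:a + 1])
        return self.fuse(torch.cat(parts, dim=-1))  # fair user embedding


# Usage: two users with different fairness requirements in the same batch.
model = AFRLSketch(num_users=6040, num_attrs=3)
users = torch.tensor([0, 1])
mask = torch.tensor([[1.0, 1.0, 0.0],    # user 0: third attribute treated as sensitive
                     [0.0, 1.0, 1.0]])   # user 1: first attribute treated as sensitive
fair_emb = model(users, mask)            # (2, 64) fair embeddings at inference time
```

Because the fairness requirement enters only as an inference-time mask, no retraining is needed per attribute combination, which is the cost the paper's first highlight refers to.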
Statistics
The MovieLens dataset contains more than 1 million movie ratings provided by 6,040 users, with user attributes Gender, Age, and Occupation.
The Taobao dataset comprises more than 26 million interactions between 1.14 million users and 840,000 advertisements, with user attributes Gender, Age, and Consumption level.
Quotes
"To meet diverse fairness requirements, Li et al. [24] propose to train a filter for each possible combination of sensitive attributes. Wu et al. [37] propose a model PFRec, which builds a set of prompt-based bias eliminators and adapters with customized attribute-specific prompts to learn fair embeddings for different attribute combinations."
"Creager et al. [7] propose FFVAE, a disentangled representation learning model that separates representations into sensitive and non-sensitive subspaces. It addresses personalized fairness by excluding from the learned fair embeddings the relevant semantic factors corresponding to different sensitive attributes specified by user fairness requirements."