The paper proposes a novel Adaptive Fair Representation Learning (AFRL) model for personalized fairness in recommendations. The key highlights are:
AFRL treats fairness requirements as inputs rather than hyperparameters, allowing it to adaptively generate fair embeddings for different users during the inference phase. This overcomes the unacceptable training cost that existing methods incur from the combinatorial explosion of sensitive-attribute combinations.
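A minimal sketch of this idea, assuming a mask-conditioned encoder (the class, function, and parameter names below are hypothetical and not taken from the paper): the per-user fairness requirement is encoded as a binary mask over sensitive attributes and passed to the model as an input at inference time, so a single trained model can serve any attribute combination without retraining.

```python
import torch
import torch.nn as nn


class FairnessConditionedEncoder(nn.Module):
    """Hypothetical encoder: the fairness requirement is a model input, so one
    trained model covers all 2^K combinations of K sensitive attributes."""

    def __init__(self, emb_dim: int, num_sensitive: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(emb_dim + num_sensitive, emb_dim),
            nn.ReLU(),
            nn.Linear(emb_dim, emb_dim),
        )

    def forward(self, user_emb: torch.Tensor, fairness_mask: torch.Tensor) -> torch.Tensor:
        # fairness_mask[i, k] = 1 means user i asks for attribute k to be treated as sensitive.
        return self.fuse(torch.cat([user_emb, fairness_mask], dim=-1))


# Inference: the same trained model handles any requested attribute combination.
encoder = FairnessConditionedEncoder(emb_dim=64, num_sensitive=3)
user_emb = torch.randn(2, 64)                      # base embeddings for two users
mask = torch.tensor([[1., 0., 1.],                 # user 0: attributes 0 and 2 sensitive
                     [0., 0., 0.]])                # user 1: no fairness constraint
fair_emb = encoder(user_emb, mask)                 # -> shape (2, 64)
```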
AFRL introduces an Information Alignment Module (IAlignM) that learns attribute-specific embeddings and a debiased collaborative embedding. This allows AFRL to exactly preserve the discriminative information of non-sensitive attributes while incorporating unbiased collaborative signals, achieving a better fairness-accuracy trade-off than existing approaches that filter out sensitive-attribute information.
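The following is a hedged sketch of how such an aggregation could look; the module name, the per-attribute heads, and the additive combination rule are assumptions for illustration, not the paper's exact IAlignM formulation. Embeddings of attributes the user marks as sensitive are dropped, while non-sensitive attribute embeddings and a debiased collaborative embedding are kept.

```python
import torch
import torch.nn as nn


class AttributeAggregator(nn.Module):
    """Hypothetical aggregation step: keep attribute-specific embeddings only for
    attributes the user did NOT mark as sensitive, then add a debiased
    collaborative embedding. The additive combination is an assumed choice."""

    def __init__(self, num_attrs: int, emb_dim: int):
        super().__init__()
        # One head per attribute; in a real system these would be trained to
        # capture that attribute's discriminative information.
        self.attr_heads = nn.ModuleList(
            [nn.Linear(emb_dim, emb_dim) for _ in range(num_attrs)]
        )

    def forward(
        self,
        debiased_collab: torch.Tensor,   # (B, D) unbiased collaborative signal
        base_emb: torch.Tensor,          # (B, D) shared user representation
        sensitive_mask: torch.Tensor,    # (B, K), 1 = treat attribute k as sensitive
    ) -> torch.Tensor:
        attr_embs = torch.stack([head(base_emb) for head in self.attr_heads], dim=1)  # (B, K, D)
        keep = (1.0 - sensitive_mask).unsqueeze(-1)   # zero out sensitive attributes
        return debiased_collab + (attr_embs * keep).sum(dim=1)


agg = AttributeAggregator(num_attrs=3, emb_dim=64)
fair_user_emb = agg(
    torch.randn(2, 64),                              # debiased collaborative embeddings
    torch.randn(2, 64),                              # shared user representations
    torch.tensor([[1., 0., 0.], [0., 1., 1.]]),      # per-user sensitivity choices
)                                                    # -> shape (2, 64)
```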
Extensive experiments and theoretical analysis demonstrate the superiority of AFRL over state-of-the-art personalized fairness models in terms of both fairness and accuracy.
Key insights extracted from Xinyu Zhu, Li..., arxiv.org, 04-12-2024: https://arxiv.org/pdf/2404.07494.pdf