Core Concepts
A GNN-based algorithm that uncovers explanations for user unfairness in recommendation systems through counterfactual reasoning.
Abstract
The article introduces GNNUERS, a novel algorithm for fairness and explainability in recommendation systems. It applies counterfactual reasoning to explain user unfairness: it perturbs the topology of the bipartite user-item graph, optimizing a perturbation vector to minimize the utility disparity across demographic groups. Key highlights include:
Introduction of the GNNUERS algorithm for explaining fairness in GNN-based recommendation models.
Utilization of counterfactual reasoning to identify user unfairness explanations.
A perturbation mechanism that alters the bipartite user-item graph to uncover disparities.
Evaluation based on real-world datasets from movie, music, grocery, and insurance domains.
Analysis of graph topological properties like degree, density, and intra-group distance for insights into unfairness.
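The perturbation idea in the highlights above can be sketched as a soft mask over user-item edges: a perturbation vector controls how much each interaction contributes, and the quantity to minimize is the utility gap between demographic groups. Everything below (`group_disparity`, the linear utility proxy, the toy graph) is an illustrative assumption, not GNNUERS's actual code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def group_disparity(adj, relevance, groups, p):
    """Utility disparity between two demographic groups under a
    soft edge mask sigmoid(p) applied to the user-item adjacency."""
    mask = sigmoid(p)                                 # one weight per edge slot
    utility = (adj * mask * relevance).sum(axis=1)    # per-user utility proxy
    m0 = utility[groups == 0].mean()
    m1 = utility[groups == 1].mean()
    return abs(m0 - m1)

# Toy bipartite graph: 4 users x 3 items, two demographic groups.
adj = np.array([[1, 1, 0],
                [1, 0, 1],
                [0, 1, 0],
                [1, 0, 0]], dtype=float)
relevance = np.ones_like(adj)    # stand-in for per-edge recommendation utility
groups = np.array([0, 0, 1, 1])  # group 0 holds more interactions

p = np.zeros_like(adj)           # sigmoid(0) = 0.5: every edge half-kept
print(group_disparity(adj, relevance, groups, p))  # prints 0.5
```

With all mask weights at 0.5, group 0 (two edges per user) reaches twice the utility of group 1 (one edge per user), so the disparity is 0.5; driving mask entries toward 0 selectively removes the interactions responsible for that gap.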
Stats
GNNUERS updates the perturbation vector so that the user-item interactions removed from the graph lead the trained GNN to generate fairer recommendations.
Experiments on real-world graphs show that GNNUERS can systematically explain user unfairness on state-of-the-art GNN-based recommendation models.
Our method focuses on explaining unfairness at the model level rather than at the level of individual interactions.
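The update described above can be sketched as gradient descent on a soft edge mask, after which the edges whose mask fell below a threshold form the counterfactual explanation. The toy graph, linear utility proxy, and learning-rate choice are all illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy bipartite graph: 4 users x 3 items, two demographic groups.
adj = np.array([[1, 1, 0],
                [1, 0, 1],
                [0, 1, 0],
                [1, 0, 0]], dtype=float)
relevance = np.ones_like(adj)    # stand-in for per-edge recommendation utility
groups = np.array([0, 0, 1, 1])  # group 0 starts with higher utility
n0, n1 = (groups == 0).sum(), (groups == 1).sum()

p = np.zeros_like(adj)           # perturbation vector (one entry per edge slot)
lr = 1.0
for _ in range(200):
    mask = sigmoid(p)
    util = (adj * mask * relevance).sum(axis=1)
    m0, m1 = util[groups == 0].mean(), util[groups == 1].mean()
    sign = np.sign(m0 - m1)
    # Analytic gradient of |m0 - m1| w.r.t. p for this linear utility proxy.
    grad = adj * relevance * mask * (1.0 - mask)
    grad[groups == 0] *= sign / n0
    grad[groups == 1] *= -sign / n1
    p -= lr * grad

# Counterfactual explanation: existing edges whose mask dropped below 0.5
# are the interactions whose removal narrows the group utility gap.
removed = (adj == 1) & (sigmoid(p) < 0.5)
```

In this toy run the optimizer suppresses edges belonging to the advantaged group until the two groups' mean utilities roughly coincide, which mirrors the idea of removing interactions to obtain fairer recommendations.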