Key concepts
Enforcing fairness constraints on samples with reliable sensitive attribute predictions can significantly improve the fairness-accuracy tradeoff compared to using all samples or samples with uncertain sensitive attributes.
Summary
The paper proposes a framework called FairDSR to handle fairness in machine learning when demographic information is partially available. The framework consists of two phases:
- Uncertainty-Aware Sensitive Attribute Prediction:
- A semi-supervised approach is used to train an attribute classifier that predicts sensitive attributes and estimates the uncertainty of the predictions.
- The attribute classifier is trained using a student-teacher framework with a consistency loss to ensure the student model focuses on samples with low uncertainty.
- Monte Carlo dropout is used to estimate the uncertainty of the sensitive-attribute predictions (a minimal sketch of this step follows this item).
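For concreteness, here is a minimal PyTorch-style sketch of Monte Carlo dropout uncertainty estimation. The classifier name `attr_clf`, the number of passes `n_mc`, and the use of predictive entropy as the uncertainty score are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def mc_dropout_uncertainty(attr_clf, x, n_mc=30):
    """Estimate uncertainty of sensitive-attribute predictions by keeping
    dropout active at inference time and averaging over n_mc stochastic passes."""
    attr_clf.train()  # keep dropout layers stochastic during inference
    with torch.no_grad():
        # probs: (n_mc, batch, n_classes) stack of per-pass class probabilities
        probs = torch.stack(
            [torch.softmax(attr_clf(x), dim=-1) for _ in range(n_mc)]
        )
    mean_probs = probs.mean(dim=0)
    # predictive entropy as the uncertainty score (higher = more uncertain)
    entropy = -(mean_probs * torch.log(mean_probs + 1e-12)).sum(dim=-1)
    return mean_probs.argmax(dim=-1), entropy
```

Samples whose uncertainty score falls below a chosen threshold are then treated as having reliable proxy sensitive attributes in the second phase.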
- Enforcing Fairness with Reliable Proxy Sensitive Attributes:
- The label classifier is trained with fairness constraints, but these constraints are only applied to samples whose sensitive attributes are predicted with low uncertainty.
- Two additional variants are proposed: FairDSR (weighted) and FairDSR (uncertain). The weighted variant applies fairness constraints to all samples but weights each sample's contribution by the uncertainty of its sensitive-attribute prediction. The uncertain variant trains the model without fairness constraints, using only the samples whose sensitive attributes are predicted with high uncertainty. A minimal sketch of the constrained training step follows this list.
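As an illustration of the second phase, below is a regularization-style sketch in PyTorch of how the low-uncertainty selection and the weighted variant could look. The paper enforces fairness constraints during label-classifier training rather than through this exact penalty; `parity_gap`, `tau`, `lam`, and `entropy` are hypothetical names introduced here for illustration.

```python
import torch

def parity_gap(scores, s_hat, weights=None):
    """Weighted statistical-parity gap: difference between the (weighted) mean
    predicted scores of the two proxy demographic groups (s_hat in {0, 1})."""
    if weights is None:
        weights = torch.ones_like(scores)
    w0 = weights * (s_hat == 0).float()
    w1 = weights * (s_hat == 1).float()
    if w0.sum() == 0 or w1.sum() == 0:
        return scores.new_zeros(())  # no gap measurable if a group is absent
    mean0 = (w0 * scores).sum() / w0.sum()
    mean1 = (w1 * scores).sum() / w1.sum()
    return (mean0 - mean1).abs()

# FairDSR-style selection: only samples with low-uncertainty proxy attributes
# contribute (tau is a hypothetical uncertainty threshold, lam a penalty weight):
#   loss = task_loss + lam * parity_gap(scores, s_hat, weights=(entropy < tau).float())
#
# FairDSR (weighted): all samples contribute, down-weighted by their uncertainty:
#   loss = task_loss + lam * parity_gap(scores, s_hat, weights=1.0 - entropy / entropy.max())
```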
Experiments on five real-world datasets show that the proposed framework can significantly improve the fairness-accuracy tradeoff compared to existing methods that use proxy or true sensitive attributes. The results also demonstrate the importance of the consistency loss in the attribute classifier and the impact of the uncertainty threshold on the fairness-accuracy tradeoff.
Statistics
"Demographic information can be missing for various reasons, e.g., due to legal restrictions, prohibiting the collection of sensitive information of individuals, or voluntary disclosure of such information."
"The data in this setting can be divided into two sets: D1 and D2. The dataset D1 does not contain demographic information, while D2 contains both sensitive and non-sensitive information."
"Without demographic information in D1, it is more challenging to enforce group fairness notions such as statistical parity (Dwork et al., 2012) and equalized odds (Hardt et al., 2016)."
Quotes
"Enforcing fairness constraints on samples with uncertain demographic information can negatively impact the fairness-accuracy tradeoff."
"Our experiments on five datasets showed that the proposed framework yields models with significantly better fairness-accuracy tradeoffs than classic attribute classifiers."
"Surprisingly, our framework can outperform models trained with fairness constraints on the true sensitive attributes in most benchmarks."