This article proposes a quantitative framework for designing fair algorithms, emphasizing the importance of distributive and procedural fairness in cybernetic societies where algorithms increasingly impact human lives.
Existing group fairness metrics in machine learning, while aiming to ensure equal outcomes across groups, may fail to account for systematic differences between groups, leading to potentially misleading fairness evaluations. This paper introduces Counterpart Fairness (CFair), a novel fairness index that addresses this issue by evaluating fairness on comparable individuals (counterparts) from different groups with similar baseline characteristics.
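The paper's matching procedure is not reproduced here; as a minimal sketch of the underlying idea, counterparts can be found by nearest-neighbor matching on an estimated propensity score over baseline characteristics, and fairness is then evaluated only on the matched pairs. All names below are illustrative rather than CFair's actual API.

```python
# Illustrative sketch (not CFair's implementation): match each member of group 1
# to its closest counterpart in group 0 on a propensity score, then compare
# model decisions on the matched pairs only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def counterpart_disparity(X, group, y_pred):
    """X: baseline covariates, group: 0/1 group labels, y_pred: model decisions."""
    # Propensity of belonging to group 1 given baseline characteristics.
    propensity = LogisticRegression(max_iter=1000).fit(X, group).predict_proba(X)[:, 1]
    p1, p0 = propensity[group == 1], propensity[group == 0]
    # Nearest-neighbor matching on the propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(p0.reshape(-1, 1))
    _, idx = nn.kneighbors(p1.reshape(-1, 1))
    matched0 = y_pred[group == 0][idx.ravel()]
    matched1 = y_pred[group == 1]
    # Disparity evaluated only on comparable counterparts.
    return np.mean(matched1) - np.mean(matched0)

# Example usage with synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
group = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
print(counterpart_disparity(X, group, y_pred))
```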
Debias-CLR, a novel contrastive learning framework, mitigates demographic bias in healthcare AI by generating counterfactual examples, yielding fairer length-of-stay predictions without sacrificing accuracy.
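Debias-CLR's exact objective is not reproduced here; the snippet below is a generic InfoNCE-style contrastive loss in which a counterfactual (demographically altered) embedding of the same record serves as the positive pair, assuming anchor and counterfactual embeddings are produced by an encoder elsewhere. Names and shapes are illustrative.

```python
# Generic contrastive-debiasing sketch (assumed, not the paper's exact loss):
# pull each clinical embedding toward its demographic counterfactual and push
# it away from other records in the batch.
import torch
import torch.nn.functional as F

def counterfactual_contrastive_loss(anchors, counterfactuals, temperature=0.1):
    """anchors, counterfactuals: (batch, dim) embeddings of the same records,
    differing only in the demographic attribute used to build the counterfactual."""
    a = F.normalize(anchors, dim=1)
    c = F.normalize(counterfactuals, dim=1)
    logits = a @ c.t() / temperature       # pairwise cosine similarities
    targets = torch.arange(a.size(0))      # positive is the matching counterfactual
    return F.cross_entropy(logits, targets)

# Example with random embeddings standing in for encoder outputs.
loss = counterfactual_contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
print(loss.item())
```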
Correcting discriminatory biases in AI systems, particularly in recruitment, is legally mandated and feasible under EU regulations like the GDPR and the AI Act, but practical compliance poses challenges and requires careful consideration of data processing practices.
The TowerDebias method leverages the Tower Property of conditional expectation to mitigate the influence of sensitive attributes in black-box machine learning models, improving fairness in predictions while managing the trade-off with predictive accuracy.
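The tower property in question states that E[E[Y | X, S] | X] = E[Y | X]: averaging a model's output over the sensitive attribute S, conditional on the remaining features X, removes the direct contribution of S. A minimal sketch of that marginalization step follows; TowerDebias's actual estimator may differ, and for the identity to hold exactly the average should be taken over P(S | X) rather than the simple marginal used here.

```python
# Assumed marginalization sketch: average a black-box model's predictions over
# candidate values of the sensitive attribute, so the debiased score depends on
# X only. (Exactness would require weighting by P(S | X); equal weights are a
# simplification for illustration.)
import numpy as np

def tower_debias_predict(black_box, X, sensitive_values, weights=None):
    """black_box(X, s) -> predictions for features X paired with sensitive values s."""
    if weights is None:
        weights = np.full(len(sensitive_values), 1.0 / len(sensitive_values))
    # Stack predictions for each candidate value of S, then average them out.
    preds = np.stack([black_box(X, np.full(len(X), s)) for s in sensitive_values])
    return weights @ preds

# Toy black box whose score leaks the sensitive attribute directly.
black_box = lambda X, s: X[:, 0] + 0.5 * s
X = np.random.default_rng(1).normal(size=(5, 2))
print(tower_debias_predict(black_box, X, sensitive_values=[0, 1]))
```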
OxonFair is a new open-source toolkit that addresses limitations in existing algorithmic fairness toolkits by supporting NLP and computer vision tasks, emphasizing fairness enforcement on validation data to combat overfitting, and offering a highly customizable approach to optimizing fairness measures alongside performance objectives.
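OxonFair's own API is not reproduced here; the sketch below illustrates the general mechanism of enforcing a fairness measure by choosing per-group decision thresholds on held-out validation scores, grid-searching for thresholds that maximize accuracy subject to a bound on the selection-rate gap. All names are illustrative.

```python
# Illustrative per-group threshold search on validation data (not OxonFair's code).
import itertools
import numpy as np

def fit_group_thresholds(scores, labels, groups, max_gap=0.05, grid=np.linspace(0, 1, 21)):
    """Grid-search per-group thresholds on validation data, maximizing accuracy
    subject to a bound on the selection-rate gap between groups."""
    group_ids = np.unique(groups)
    best, best_acc = None, -1.0
    for combo in itertools.product(grid, repeat=len(group_ids)):
        thresholds = dict(zip(group_ids, combo))
        decisions = np.array([s >= thresholds[g] for s, g in zip(scores, groups)])
        rates = [decisions[groups == g].mean() for g in group_ids]
        if max(rates) - min(rates) > max_gap:  # fairness enforced on validation data
            continue
        acc = (decisions == labels).mean()
        if acc > best_acc:
            best, best_acc = thresholds, acc
    return best

# Synthetic validation set with two groups.
rng = np.random.default_rng(2)
scores = rng.uniform(size=300)
labels = (scores + rng.normal(0, 0.2, 300) > 0.5).astype(int)
groups = rng.integers(0, 2, 300)
print(fit_group_thresholds(scores, labels, groups))
```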
Ensuring algorithmic fairness across intersecting social groups presents significant statistical and ethical challenges, particularly due to data scarcity, necessitating new fairness metrics that account for uncertainty and prioritize sufficient model performance for all groups.
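As a concrete illustration of the data-scarcity problem (not code from the paper), the sketch below computes a per-group positive rate together with a normal-approximation confidence interval for every intersection of two protected attributes, making explicit how small intersectional groups yield wide, uncertain estimates.

```python
# Illustrative sketch: per-intersection selection rates with confidence intervals,
# showing how uncertainty grows for sparse intersectional groups.
import numpy as np

def intersectional_rates(decisions, attr_a, attr_b, z=1.96):
    out = {}
    for a in np.unique(attr_a):
        for b in np.unique(attr_b):
            mask = (attr_a == a) & (attr_b == b)
            n = mask.sum()
            if n == 0:
                out[(a, b)] = (np.nan, np.nan, 0)
                continue
            p = decisions[mask].mean()
            half_width = z * np.sqrt(p * (1 - p) / n)  # normal-approximation CI
            out[(a, b)] = (p, half_width, int(n))
    return out

rng = np.random.default_rng(3)
decisions = rng.integers(0, 2, 500)
attr_a = rng.integers(0, 2, 500)                          # e.g. two genders
attr_b = rng.choice([0, 1, 2], 500, p=[0.7, 0.25, 0.05])  # one rare group
for key, (p, hw, n) in intersectional_rates(decisions, attr_a, attr_b).items():
    print(key, f"rate={p:.2f} ± {hw:.2f} (n={n})")
```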
This research paper leverages Partial Information Decomposition (PID) to analyze the complex relationships and tradeoffs among three prominent group fairness notions in machine learning: statistical parity, equalized odds, and predictive parity, revealing that satisfying all three exactly at the same time is generally impossible.
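For reference, the three notions (stated here in their standard binary-classification form, not in the paper's PID formulation) are, for predicted label Y-hat, true label Y, and group attribute A:

```latex
\begin{align*}
\text{Statistical parity:}  \quad & P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=a') \\
\text{Equalized odds:}      \quad & P(\hat{Y}=1 \mid Y=y,\, A=a) = P(\hat{Y}=1 \mid Y=y,\, A=a') \quad \text{for } y \in \{0,1\} \\
\text{Predictive parity:}   \quad & P(Y=1 \mid \hat{Y}=1,\, A=a) = P(Y=1 \mid \hat{Y}=1,\, A=a')
\end{align*}
```

A classical impossibility result states that when base rates P(Y=1 | A=a) differ across groups and the classifier is imperfect, equalized odds and predictive parity cannot hold simultaneously; the PID analysis quantifies how this tension, together with statistical parity, plays out.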
This research paper introduces a novel approach to ensure fairness in algorithmic decision-making over time by incorporating the concept of "envy-freeness at the time of decision" (EFTD) within the framework of stochastic convex optimization.
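The paper's precise EFTD condition is not restated here; as general background only (an assumption about the shape of the notion, not the paper's definition), classical envy-freeness requires that no individual prefers another's allocation, and the time-of-decision variant adapts this comparison to decisions made sequentially within the stochastic convex optimization framework:

```latex
% Classical (static) envy-freeness: no individual i prefers the allocation x_j
% received by another individual j under i's own utility u_i.
u_i(x_i) \ge u_i(x_j) \quad \text{for all individuals } i, j.
```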
This research paper introduces novel methods, FairBiT and FairLeap, for measuring and mitigating conditional demographic disparity in machine learning models using optimal transport, aiming to achieve conditional demographic parity even with complex legitimate features and continuous outcomes.
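FairBiT and FairLeap themselves are not reproduced here; the sketch below illustrates the underlying measurement idea under the assumption that conditional demographic disparity can be summarized as the stratum-weighted average optimal-transport (1-D Wasserstein) distance between group-wise score distributions within each stratum of the legitimate feature. Names are illustrative.

```python
# Illustrative conditional-disparity measurement (not FairBiT/FairLeap themselves):
# within each stratum of the legitimate feature, compare the score distributions
# of the two groups with a 1-D Wasserstein (optimal transport) distance.
import numpy as np
from scipy.stats import wasserstein_distance

def conditional_demographic_disparity(scores, group, legitimate):
    strata, counts = np.unique(legitimate, return_counts=True)
    total, n = 0.0, 0
    for stratum, count in zip(strata, counts):
        mask = legitimate == stratum
        s0 = scores[mask & (group == 0)]
        s1 = scores[mask & (group == 1)]
        if len(s0) == 0 or len(s1) == 0:
            continue  # stratum lacks one of the groups
        total += count * wasserstein_distance(s0, s1)
        n += count
    return total / n  # stratum-size-weighted average disparity

rng = np.random.default_rng(4)
legitimate = rng.integers(0, 3, 400)            # e.g. job seniority level
group = rng.integers(0, 2, 400)
scores = rng.normal(loc=0.1 * group, size=400)  # group-dependent score shift
print(conditional_demographic_disparity(scores, group, legitimate))
```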