Core Concepts
The authors conduct a comprehensive survey of existing studies on fairness testing in ML software, focusing on the testing workflow and components. They aim to identify trends, research focus, and potential directions in the field.
Summary
The paper examines the importance of fairness testing in Machine Learning (ML) software, highlighting the need to address unfair decision-making. It discusses various fairness definitions, test input generation techniques, and search-based approaches for uncovering discriminatory instances, and offers insights into the challenges and advancements in fairness testing within software engineering.
Key points include:
- Unfair behaviors in ML software have ethical implications.
- Fairness bugs can result from misalignment between desired conditions and actual outcomes.
- Different fairness definitions guide test input generation techniques.
- Search-based methods aim to efficiently generate discriminatory instances.
- Two-phase search frameworks are commonly used for generating individual discriminatory instances.
- Techniques like Themis, Aequitas, ExpGA, I&D, ADF, EIDIG, NeuronFair, and DICE are discussed.
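The two-phase idea above can be sketched in code. The following is a minimal illustration, not the algorithm of any specific tool: the toy `model`, its bias, the input ranges, and the perturbation strategy are all invented for demonstration. Tools such as Aequitas or ADF replace the random phases with probability- or gradient-guided search, but the structure (a global phase to find seed instances, a local phase to harvest neighbors) is the same. An input is individually discriminatory if flipping only the protected attribute flips the prediction.

```python
import random

PROTECTED_INDEX = 2  # position of the protected attribute (gender) in the input

def model(x):
    """Hypothetical biased classifier: approve iff a score passes a threshold.
    The gender term makes it deliberately unfair for illustration."""
    age, income, gender = x
    score = 0.3 * age + 0.5 * income - 10 * gender
    return 1 if score > 50 else 0

def is_discriminatory(x):
    """Individual discrimination check: does flipping only the
    protected attribute change the model's prediction?"""
    flipped = list(x)
    flipped[PROTECTED_INDEX] = 1 - flipped[PROTECTED_INDEX]
    return model(x) != model(flipped)

def two_phase_search(n_global=200, n_local=20, seed=0):
    """Two-phase search: global random sampling to find discriminatory
    seeds, then local perturbation of non-protected features around
    each seed to generate more discriminatory instances nearby."""
    rng = random.Random(seed)
    found = []
    for _ in range(n_global):  # phase 1: global search
        x = [rng.uniform(18, 90), rng.uniform(0, 200), rng.randint(0, 1)]
        if is_discriminatory(x):
            found.append(tuple(x))
            for _ in range(n_local):  # phase 2: local search around the seed
                y = list(x)
                for i in (0, 1):  # perturb only non-protected features
                    y[i] += rng.uniform(-1, 1)
                if is_discriminatory(y):
                    found.append(tuple(y))
    return found

instances = two_phase_search()
# each entry is an input whose prediction flips with the protected attribute
```

The local phase is what makes these frameworks efficient: discriminatory instances tend to cluster near decision boundaries, so perturbing around a known seed yields many more failures than fresh global sampling.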
The survey methodology involved keyword searches on DBLP followed by snowballing to collect relevant papers. Thematic synthesis was used for analysis with a focus on test input generation techniques.
Statistics
89% of fairness testing publications emerged since 2019.
The cumulative number of publications on fairness testing has risen steadily through 2023.
Quotes
"The unfair behaviors exhibited by ML software can have profound ethical implications."
"Fairness bugs can result from misalignment between desired conditions and actual outcomes."