The paper introduces MODNESS, a model-driven framework for conceptualizing, designing, implementing, and executing fairness assessment workflows. MODNESS allows users to define their own bias and fairness concepts, going beyond pre-established definitions.
The key highlights are:
MODNESS is built on a tailored metamodel that supports the specification of bias definitions (including sensitive variables, privileged/unprivileged groups, and positive outcomes) and fairness analyses (with customizable metrics).
MODNESS automatically generates the implementation code to assess fairness based on the user-defined specifications, leveraging libraries like Pandas and AIF360.
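To give a feel for the kind of code such a specification could generate, here is a minimal sketch in plain Pandas that computes statistical parity difference for a user-defined sensitive variable. The dataset, column names, and group definitions are illustrative assumptions, not MODNESS output.

```python
import pandas as pd

# Hypothetical dataset: "sex" is the sensitive variable and "hired" the
# positive outcome (1 = hired). Values are invented for illustration.
df = pd.DataFrame({
    "sex":   ["M", "M", "F", "F", "M", "F", "M", "F"],
    "hired": [1,    1,   0,   1,   0,   0,   1,   0],
})

# User-specified privileged / unprivileged groups
privileged = df[df["sex"] == "M"]
unprivileged = df[df["sex"] == "F"]

# Statistical parity difference:
#   P(positive outcome | unprivileged) - P(positive outcome | privileged)
spd = unprivileged["hired"].mean() - privileged["hired"].mean()
print(round(spd, 2))  # -0.5 here: the unprivileged group is hired less often
```

A value of 0 would indicate parity between the groups; in a MODNESS workflow the metric itself would be part of the user's customizable specification rather than fixed as above.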
The evaluation demonstrates MODNESS's expressiveness by modeling diverse use cases from social, financial, and software engineering domains. It also shows MODNESS's ability to overcome limitations of existing MDE-based approaches for fairness assessment.
MODNESS enables users to define and evaluate fairness concepts in emerging domains, such as mitigating popularity bias in recommender systems for software engineering, including Arduino software component recommendation.