Research on gender bias in machine translation heavily favors a handful of high-resource European languages, neglecting African languages and even some lower-resource European ones, underscoring the need for more diverse and inclusive research in the field.
This paper introduces MiTTenS, a dataset designed to evaluate, and ultimately help mitigate, gender mistranslation in both dedicated translation systems and large language models; its evaluations reveal systematic biases and provide a basis for fairer, more inclusive language technologies.
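To make the idea of evaluating gender mistranslation concrete, the sketch below scores a toy English-to-Spanish example by checking whether a translation uses the gendered form that the source sentence requires. The record fields, word lists, and the `fake_translate` stand-in are illustrative assumptions, not the actual MiTTenS schema, language coverage, or metric.

```python
# Illustrative sketch only: the record fields, word lists, and `translate`
# callable below are hypothetical and do not reflect MiTTenS's actual schema,
# language coverage, or evaluation protocol.
from typing import Callable, Dict, List


def uses_expected_gender(translation: str, expected: List[str], wrong: List[str]) -> bool:
    """True if the translation contains an expected gendered form and no wrong-gender form."""
    tokens = {tok.strip(".,;:!?").lower() for tok in translation.split()}
    return any(w in tokens for w in expected) and not any(w in tokens for w in wrong)


def mistranslation_rate(examples: List[Dict], translate: Callable[[str], str]) -> float:
    """Fraction of examples where the system produces the wrong gendered form."""
    errors = sum(
        0 if uses_expected_gender(translate(ex["source"]), ex["expected_forms"], ex["wrong_forms"]) else 1
        for ex in examples
    )
    return errors / len(examples)


# Toy English -> Spanish example: the source sentence disambiguates the referent's gender.
examples = [
    {
        "source": "The doctor said she would arrive soon.",
        "expected_forms": ["doctora"],  # feminine form required by the source
        "wrong_forms": ["doctor"],      # masculine form would be a mistranslation
    },
]

if __name__ == "__main__":
    # Stand-in for a real translation system or LLM call.
    fake_translate = lambda text: "El doctor dijo que llegaría pronto."
    print(f"Gender mistranslation rate: {mistranslation_rate(examples, fake_translate):.2f}")
```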
Gender bias in machine translation creates significant disparities in the time and technical effort users must spend post-editing translations, and the resulting economic costs fall unevenly, and often unfairly, on different stakeholders.
Gender bias persists in commercial machine translation systems, reducing the accuracy of translations that involve gendered references.