DAMA, a novel debiasing method, reduces gender bias in language models while preserving model performance.
Effective debiasing of large language models requires integrating the localization and mitigation of gender bias within a unified framework.
Projective methods can effectively reduce both intrinsic and downstream bias in pre-trained language models, but reducing intrinsic bias does not guarantee downstream bias mitigation.
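The projective idea mentioned above can be illustrated with a minimal sketch: remove the component of each embedding that lies along a presumed bias direction. This is a generic linear-projection debiasing sketch, not the DAMA method itself; the vectors and the `he_minus_she` direction below are synthetic stand-ins, and names like `project_out` are hypothetical.

```python
import numpy as np

def project_out(embeddings: np.ndarray, bias_direction: np.ndarray) -> np.ndarray:
    """Remove the component along the bias direction: x' = x - (x . v) v."""
    v = bias_direction / np.linalg.norm(bias_direction)
    return embeddings - np.outer(embeddings @ v, v)

# Toy example with synthetic data (a real pipeline would estimate the
# bias direction from word pairs such as "he" - "she").
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))            # 4 token embeddings of dimension 8
he_minus_she = rng.normal(size=8)        # stand-in for a learned gender direction

debiased = project_out(emb, he_minus_she)
v = he_minus_she / np.linalg.norm(he_minus_she)
print(np.allclose(debiased @ v, 0))      # debiased vectors are orthogonal to v
```

Such a projection zeroes the measured (intrinsic) bias component, which is exactly why, as the point above notes, it need not remove bias expressed through downstream behavior.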