Addressing Bias and Fairness in Large Language Models: A Comprehensive Survey
This survey reviews recent advances in addressing bias and promoting fairness in large language models (LLMs). It covers definitions of fairness, techniques for quantifying bias, and algorithms for mitigating bias at each stage of the LLM workflow. The survey also summarizes available resources, including toolkits and datasets, to support further research on and development of fair LLMs.