Uncovering and Mitigating Implicit Ranking Unfairness in Large Language Models
Large language models exhibit substantial implicit ranking unfairness based solely on non-sensitive user profiles. This form of unfairness is more widespread and less noticeable than explicit unfairness, threatening the ethical foundation of LLM-based ranking applications.
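To make the claim concrete, here is a minimal sketch (not from the paper) of how implicit ranking unfairness could be probed: rank the same items for two user profiles that differ only in a non-sensitive cue, then measure how much the two rankings disagree. The `query_llm_ranker` stub, the profile texts, and the item list are illustrative assumptions, not the authors' method.

```python
# Minimal probe for implicit ranking unfairness (illustrative sketch).
# `query_llm_ranker` is a hypothetical stand-in for an LLM-backed ranking
# call; it is NOT from the paper. Replace the stub with a real model call.
from itertools import combinations

ITEMS = ["item_a", "item_b", "item_c", "item_d", "item_e"]

def query_llm_ranker(profile: str, items: list[str]) -> list[str]:
    """Hypothetical LLM ranking call: returns `items` ordered for `profile`.
    A deterministic stub is used here so the sketch runs standalone."""
    return sorted(items, key=lambda it: hash((profile, it)))

def pairwise_disagreement(r1: list[str], r2: list[str]) -> float:
    """Fraction of item pairs whose relative order flips between the two
    rankings (the normalized Kendall tau distance)."""
    pos1 = {it: i for i, it in enumerate(r1)}
    pos2 = {it: i for i, it in enumerate(r2)}
    pairs = list(combinations(r1, 2))
    flips = sum((pos1[a] < pos1[b]) != (pos2[a] < pos2[b]) for a, b in pairs)
    return flips / len(pairs)

# Profiles identical except for a non-sensitive cue (a given name) that a
# model may implicitly associate with a demographic group.
profile_1 = "User: Emily, 29, enjoys hiking and photography."
profile_2 = "User: Jamal, 29, enjoys hiking and photography."

r1 = query_llm_ranker(profile_1, ITEMS)
r2 = query_llm_ranker(profile_2, ITEMS)
print(f"ranking disagreement: {pairwise_disagreement(r1, r2):.2f}")
```

A disagreement of 0.0 means the two profiles received identical rankings; values near 1.0 indicate the ranking was largely reordered by the non-sensitive cue alone, which is the kind of implicit unfairness the paper describes.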