Principled Approaches for Learning to Defer to Multiple Experts
This paper introduces a new family of surrogate losses tailored to the multiple-expert setting, in which the predictor and the deferral function are learned jointly in a single stage. The authors prove that these surrogate losses admit strong H-consistency bounds, a guarantee more relevant and informative than Bayes-consistency: Bayes-consistency is an asymptotic property over all measurable functions, whereas H-consistency bounds are non-asymptotic and hold for the restricted hypothesis set H actually used for learning.
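To make the setup concrete, the following is a minimal sketch of a cross-entropy-style surrogate for deferring to multiple experts. It is an illustrative simplification, not the paper's exact surrogate family: the model produces scores for the K class labels plus one "defer" slot per expert, and deferral slots for experts that are correct on the example are rewarded alongside the true class. All names (`defer_surrogate_loss`, the score layout) are assumptions for this sketch.

```python
import numpy as np

def defer_surrogate_loss(scores, y, expert_preds):
    """Simplified cross-entropy-style surrogate for learning to defer.

    scores: array of shape (K + E,); the first K entries score class
            labels, the last E entries score "defer to expert j".
    y: true label in {0, ..., K-1}.
    expert_preds: array of shape (E,) with each expert's predicted label.

    Each deferral slot whose expert is correct on this example receives
    a negative log-probability term, so the loss encourages deferring to
    accurate experts (an illustrative sketch, not the paper's loss).
    """
    K = len(scores) - len(expert_preds)
    # Numerically stable log-softmax over all K + E slots.
    shifted = scores - np.max(scores)
    log_probs = shifted - np.log(np.sum(np.exp(shifted)))
    loss = -log_probs[y]                      # classification term
    for j, pred in enumerate(expert_preds):
        if pred == y:                         # expert j is correct here
            loss -= log_probs[K + j]          # reward deferral to j
    return loss

# With uniform scores, 3 classes and 1 correct expert, both the true
# class and the deferral slot contribute -log(1/4) each.
loss_correct = defer_surrogate_loss(np.zeros(4), 0, np.array([0]))
loss_wrong = defer_surrogate_loss(np.zeros(4), 0, np.array([1]))
```

A wrong expert contributes no deferral term, so `loss_wrong` is strictly smaller than `loss_correct` here; minimizing the surrogate over many examples pushes probability mass toward whichever of "predict" or "defer to expert j" is reliably correct.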