Core Concepts
The central contribution of this paper is a new family of surrogate losses tailored to the multiple-expert learning-to-defer setting, in which the prediction and deferral functions are learned simultaneously. The authors prove that these surrogate losses admit strong H-consistency bounds, guarantees that are more relevant and advantageous than Bayes-consistency alone.
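To make the setting concrete, the deferral loss in this single-stage formulation can be sketched as follows. This is a minimal illustrative sketch, assuming a scorer that outputs a joint score vector over the n labels and the experts, so that the argmax either predicts a label directly or defers to one expert; the function name `deferral_loss` and the zero-one expert cost are assumptions for illustration, not the paper's notation:

```python
import numpy as np

def deferral_loss(scores, y, expert_preds, n_labels):
    """Zero-one-style deferral loss for a single example.

    scores: array of length n_labels + n_experts; the argmax either
            predicts a label directly or defers to an expert.
    y: true label.
    expert_preds: each expert's prediction on this input.
    """
    k = int(np.argmax(scores))
    if k < n_labels:                       # predict a label directly
        return float(k != y)               # standard zero-one loss
    j = k - n_labels                       # defer to expert j
    return float(expert_preds[j] != y)     # cost = that expert's mistake

# Example: 2 labels, 1 expert; the last score governs deferral.
print(deferral_loss(np.array([2.0, 0.5, 1.0]), 0, [1], 2))  # predicts label 0
print(deferral_loss(np.array([0.5, 0.2, 3.0]), 1, [0], 2))  # defers, expert wrong
```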
Summary
The paper presents a study of surrogate losses and algorithms for the general problem of learning to defer with multiple experts. Its key contributions are:
- Introduction of a new family of surrogate losses specifically designed for the multiple-expert setting, where the prediction and deferral functions are learned simultaneously.
- Proof that these surrogate losses benefit from strong H-consistency bounds, which are more relevant and advantageous than Bayes-consistency.
- Illustration of the analysis through several examples of practical surrogate losses, each with explicit guarantees.
- H-consistency bounds that incorporate the minimizability gap, which can yield more favorable guarantees than bounds based on the approximation error.
- Derivation of learning bounds for the deferral loss from the H-consistency bounds and Rademacher complexity.
- Experimental results on the SVHN and CIFAR-10 datasets, showing that overall system accuracy correlates positively with the number of experts.
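One way a smooth surrogate for the deferral loss above can look is a cost-sensitive log loss over the augmented score vector. The sketch below is a generic illustration in that spirit, not necessarily the paper's exact surrogate family: it rewards high score on the true label and on every expert that happens to be correct on the input (the function name `surrogate_loss` and this particular weighting are assumptions):

```python
import numpy as np

def surrogate_loss(scores, y, expert_preds):
    """Cost-sensitive log-loss sketch over labels and experts.

    scores: length n_labels + n_experts; expert_preds: expert outputs.
    Penalizes low (log-)probability on the true label and on each
    correct expert, so minimizing it encourages both accurate direct
    prediction and deferral to reliable experts.
    """
    n_labels = len(scores) - len(expert_preds)
    z = scores - scores.max()                   # stable log-softmax
    log_p = z - np.log(np.exp(z).sum())
    loss = -log_p[y]                            # true-label term
    for j, pred in enumerate(expert_preds):
        if pred == y:                           # expert j is correct here
            loss -= log_p[n_labels + j]         # reward deferring to j
    return float(loss)
```

Because the surrogate is differentiable in the scores, it can be minimized with standard gradient-based training, unlike the discontinuous deferral loss itself.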