The key highlights and insights are:
The authors introduce a novel framework for regression with deferral, where the learner can choose to defer predictions to multiple experts. This is an extension of the well-studied problem of learning to defer in classification contexts.
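The deferral setup above can be sketched in a few lines. This is a minimal illustration, not the paper's exact formulation: the function name, the choice encoding (0 = predict, j ≥ 1 = defer to expert j), and the use of an expert's own squared loss as its label-dependent cost are all assumptions made here for concreteness.

```python
def deferral_loss(f_pred, expert_preds, choice, y,
                  base_loss=lambda p, t: (p - t) ** 2):
    """Loss incurred by a regression-with-deferral system on one example.

    choice == 0 means the learner uses its own prediction f_pred;
    choice == j >= 1 means it defers to expert j, incurring that
    expert's cost (here simply the expert's own regression loss on y,
    one simple label-dependent cost among those the framework allows).
    """
    if choice == 0:
        return base_loss(f_pred, y)           # learner predicts itself
    return base_loss(expert_preds[choice - 1], y)  # defer to expert j
```

Any bounded regression loss could replace the squared `base_loss`, and instance-dependent costs would simply take `x` as an additional argument.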
They present a comprehensive analysis for both the single-stage scenario (simultaneous learning of predictor and deferral functions) and the two-stage scenario (pre-trained predictor with learned deferral function).
The authors introduce new surrogate loss functions for both scenarios and prove that they are supported by strong H-consistency bounds. These bounds provide consistency guarantees that are stronger than Bayes consistency, as they are non-asymptotic and hypothesis set-specific.
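For intuition, H-consistency bounds in this line of work (by the same authors) typically take the following form, relating the estimation error of the target deferral loss L to that of the surrogate loss ℓ, within a fixed hypothesis set H; the exact functional Γ and minimizability gaps M depend on the specific losses, so this is a generic schema rather than the paper's precise statement:

```latex
\mathcal{E}_{L}(h) - \mathcal{E}_{L}^{*}(\mathcal{H}) + \mathcal{M}_{L}(\mathcal{H})
\;\le\;
\Gamma\!\left( \mathcal{E}_{\ell}(h) - \mathcal{E}_{\ell}^{*}(\mathcal{H}) + \mathcal{M}_{\ell}(\mathcal{H}) \right)
```

Because the bound holds for any h in H at any sample size, it is non-asymptotic and hypothesis set-specific, which is what makes it stronger than Bayes consistency.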
The proposed framework is versatile, applying to multiple experts, accommodating any bounded regression losses, addressing both instance-dependent and label-dependent costs, and supporting both single-stage and two-stage methods.
The authors show that their single-stage formulation includes the recent regression-with-abstention framework as a special case, in which there is a single expert, the loss is the squared loss, and the cost is label-independent.
Minimizing the proposed loss functions directly leads to novel algorithms for regression with deferral. The authors report the results of extensive experiments showing the effectiveness of their proposed algorithms.
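Since the deferral loss itself is non-differentiable in the deferral scores, training minimizes a smooth surrogate. The sketch below uses a softmax-weighted combination of option costs as one generic differentiable relaxation; the function name and this particular surrogate form are illustrative assumptions, not the paper's actual surrogate losses.

```python
import numpy as np

def surrogate_deferral_loss(costs, scores):
    """Smooth surrogate for the deferral loss on one example.

    costs[0] is the learner's own regression loss and costs[j] is
    expert j's cost; scores are the real-valued outputs of the
    deferral function.  Weighting costs by a softmax over scores
    gives a differentiable upper proxy: driving the loss down pushes
    probability mass toward the lowest-cost option.
    """
    z = np.exp(scores - np.max(scores))   # stable softmax
    p = z / z.sum()
    return float(p @ costs)
```

In the single-stage scenario the predictor (which determines `costs[0]`) and the deferral scores are learned jointly; in the two-stage scenario the predictor is pre-trained and only the scores are optimized.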
by Anqi Mao, Meh... at arxiv.org, 03-29-2024
https://arxiv.org/pdf/2403.19494.pdf