Core Concepts
The proposed cross-validation conformal risk control (CV-CRC) method extends conformal prediction to provide calibrated uncertainty quantification with formal guarantees for a broader range of risk functions, while using the available data more efficiently than the existing validation-based conformal risk control (VB-CRC) approach.
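The kind of guarantee at stake can be stated in one line. The following sketch uses notation standard in the conformal risk control literature, not taken from this summary: $\ell$ is a bounded risk (loss) function, $\mathcal{C}_{\lambda}$ is the prediction set at threshold $\lambda$, and $\alpha$ is the user-specified risk level.

$$
\mathbb{E}\big[\ell\big(Y, \mathcal{C}_{\lambda}(X)\big)\big] \le \alpha,
$$

where the expectation is over the data used for calibration and the test pair $(X, Y)$. The familiar miscoverage guarantee of conformal prediction is the special case $\ell(y, \mathcal{C}) = \mathbf{1}\{y \notin \mathcal{C}\}$.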
Abstract
The paper introduces a novel cross-validation-based conformal risk control (CV-CRC) method that generalizes the existing validation-based conformal risk control (VB-CRC) approach.
Key highlights:
CV-CRC partitions the available data into multiple folds, using leave-fold-out training and cross-validation to set the prediction-set threshold. This makes more efficient use of limited data than VB-CRC, which must hold out a separate validation set.
CV-CRC provides theoretical guarantees on the average risk of the prediction set that match those of VB-CRC, and these guarantees cover a broad class of risk functions beyond miscoverage probability alone.
Numerical experiments on vector regression and temporal point process prediction tasks demonstrate that CV-CRC can achieve lower average prediction set sizes compared to VB-CRC, especially when data is limited.
The core idea is to leverage cross-validation to reuse the available data more efficiently for uncertainty quantification, while maintaining the theoretical risk control guarantees of conformal prediction.
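To make the cross-validation idea concrete, here is a minimal sketch for the simplest special case: symmetric regression intervals `[f(x) - lam, f(x) + lam]` with miscoverage as the risk function. This is not the paper's general algorithm; the function name `cv_crc_threshold`, the helper `fit_predict`, and the conservative-quantile calibration rule are illustrative assumptions. The point it demonstrates is that every data point contributes a held-out residual, so no separate validation set is wasted.

```python
import numpy as np

def cv_crc_threshold(X, y, fit_predict, alpha, n_folds=5):
    """Illustrative sketch of cross-validated threshold calibration.

    For each fold, a model is trained on the remaining folds
    (leave-fold-out training) and absolute residuals are collected on
    the held-out fold. The threshold is a conservative empirical
    quantile of all held-out residuals, targeting miscoverage level
    alpha (the 0/1 special case of a general risk function).
    """
    idx = np.arange(len(y))
    folds = np.array_split(idx, n_folds)
    residuals = []
    for k in range(n_folds):
        held_out = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        preds = fit_predict(X[train], y[train], X[held_out])
        residuals.extend(np.abs(y[held_out] - preds))
    residuals = np.sort(np.asarray(residuals))
    # conservative quantile index, as in split-conformal calibration
    q = int(np.ceil((1 - alpha) * (len(residuals) + 1))) - 1
    return residuals[min(q, len(residuals) - 1)]

# Toy usage: linear data with Gaussian noise, least-squares predictor.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=200)

def fit_predict(X_tr, y_tr, X_te):
    # least-squares fit on the training folds only
    w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ w

lam = cv_crc_threshold(X, y, fit_predict, alpha=0.1)
```

The intervals `[f(x) - lam, f(x) + lam]` then target 90% coverage on average; the contrast with VB-CRC is that here all 200 points both train models and supply calibration residuals.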
Stats
The paper does not report standalone numerical statistics for its key claims; the results are presented as empirical risk and inefficiency plots comparing VB-CRC and CV-CRC.