The paper focuses on the problem of conformal prediction with conditional guarantees. Prior work has shown that it is impossible to construct nontrivial prediction sets with full conditional coverage guarantees. The authors propose PLCP, a framework that aims to improve the conditional validity of prediction sets by learning uncertainty-guided features from the calibration data.
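For context, the two notions of coverage involved (standard definitions, stated here for clarity rather than quoted from the paper) for a prediction set $C(X)$ at miscoverage level $\alpha$ are

$$\text{marginal: } \mathbb{P}\big(Y \in C(X)\big) \ge 1-\alpha, \qquad \text{conditional: } \mathbb{P}\big(Y \in C(X) \mid X = x\big) \ge 1-\alpha \ \text{ for almost every } x;$$

the impossibility result concerns the conditional notion, which is what motivates relaxed, feature- or group-conditional targets such as the one PLCP pursues.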
PLCP iteratively optimizes two algorithmic principles over the finite calibration data: constructing prediction sets from the current uncertainty-guided features, and updating those features to improve the conditional validity of the sets. The authors provide theoretical guarantees on the mean squared conditional error (MSCE) of the prediction sets constructed by PLCP in both the infinite- and finite-data regimes, and derive the implied marginal and conditional coverage guarantees.
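As a rough illustration of this alternating structure, here is a minimal sketch under the assumption that the learned uncertainty-guided features take the form of a finite grouping of the covariate space; the function names and parameters are hypothetical, and this is not the authors' exact PLCP procedure:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def learn_groups(features, scores, n_groups=4, alpha=0.1, n_iters=10, seed=0):
    """Alternate between (i) per-group conformal quantiles of the scores and
    (ii) refitting a feature-based grouping rule against a pinball-loss target.
    features: array of shape (n_samples, n_features); scores: conformity scores.
    Hypothetical sketch, not the paper's exact algorithm."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    assign = rng.integers(0, n_groups, size=len(scores))  # random initial grouping
    grouper = None
    for _ in range(n_iters):
        # Step 1: (1 - alpha) quantile of the conformity scores within each group.
        qs = np.array([
            np.quantile(scores[assign == g], 1 - alpha) if np.any(assign == g) else scores.max()
            for g in range(n_groups)
        ])
        # Pinball loss of each calibration score against each group's quantile.
        diff = scores[:, None] - qs[None, :]
        pinball = np.where(diff >= 0, (1 - alpha) * diff, -alpha * diff)
        target = pinball.argmin(axis=1)  # best-fitting group for each point
        # Step 2: learn a covariate-based rule reproducing those assignments,
        # so the grouping extends to unseen test points.
        grouper = DecisionTreeClassifier(max_depth=3, random_state=seed)
        grouper.fit(features, target)
        assign = grouper.predict(features)
    # Final per-group quantiles under the learned grouping.
    qs = np.array([
        np.quantile(scores[assign == g], 1 - alpha) if np.any(assign == g) else scores.max()
        for g in range(n_groups)
    ])
    return grouper, qs
```

At test time the prediction set for a new point x would collect all labels y whose conformity score is at most `qs[grouper.predict(x)]`; the decision tree here is only a stand-in for whatever feature-learning model one prefers.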
The experimental results show that PLCP consistently outperforms the Split Conformal method in terms of conditional coverage and interval length across diverse datasets and tasks. PLCP also matches the performance of BatchGCP, which relies on predefined groups, and effectively identifies and covers additional meaningful groups.
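To make the reported metrics concrete, one way group-conditional coverage and average interval length could be computed for regression intervals is sketched below (a hypothetical evaluation helper, not code from the paper):

```python
import numpy as np

def groupwise_metrics(y, lower, upper, group_ids):
    """Empirical coverage and mean interval length within each group,
    for prediction intervals [lower, upper]."""
    y, lower, upper, group_ids = map(np.asarray, (y, lower, upper, group_ids))
    covered = (lower <= y) & (y <= upper)          # indicator of y falling in its interval
    return {
        g: {
            "coverage": float(covered[group_ids == g].mean()),
            "length": float((upper - lower)[group_ids == g].mean()),
        }
        for g in np.unique(group_ids)
    }
```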
Key insights drawn from the original content by Shayan Kiyan... at arxiv.org, 04-29-2024: https://arxiv.org/pdf/2404.17487.pdf