Verifiably Robust Conformal Prediction for Improved Uncertainty Quantification Under Adversarial Attacks
This paper introduces VRCP (Verifiably Robust Conformal Prediction), a framework that combines conformal prediction with neural network verification to construct prediction sets whose coverage guarantees continue to hold even when inputs are subject to bounded adversarial perturbations.
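To make the general recipe concrete, below is a minimal sketch (not the paper's exact algorithm) of how such a framework can be assembled: standard split-conformal calibration on clean data, followed by an inference step that includes a label whenever a verified lower bound on its nonconformity score over the perturbation ball falls below the calibrated threshold. The function `score_lower_bound` is a hypothetical placeholder for the sound bound a neural network verifier (e.g., a bound-propagation tool) would supply; the score definition and all names here are illustrative assumptions.

```python
import numpy as np

def nonconformity(probs, label):
    """Nonconformity score: 1 minus the softmax probability of the candidate label (illustrative choice)."""
    return 1.0 - probs[label]

def calibrate_quantile(cal_probs, cal_labels, alpha):
    """Split-conformal threshold from clean calibration data."""
    scores = np.array([nonconformity(p, y) for p, y in zip(cal_probs, cal_labels)])
    n = len(scores)
    # Finite-sample-corrected quantile index for (1 - alpha) coverage.
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(scores)[min(k, n) - 1]

def robust_prediction_set(x, q_hat, num_classes, eps, score_lower_bound):
    """Include every label whose verified lower bound on the nonconformity score,
    taken over the eps-ball around the (possibly perturbed) input x, is below q_hat.
    Because the clean input lies inside that ball, the true label is retained
    whenever its clean score would have been below q_hat."""
    return [y for y in range(num_classes)
            if score_lower_bound(x, y, eps) <= q_hat]
```

In this sketch, the only change from vanilla split conformal prediction is at inference time: the pointwise score s(x, y) is replaced by a verified lower bound over the perturbation ball, which can only enlarge the prediction set and therefore preserves the nominal coverage guarantee under attack.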