Key Idea
This paper introduces VRCP, a novel framework that combines conformal prediction with neural network verification to construct prediction sets whose coverage guarantees hold for machine learning models even under adversarial attacks.
Jeary, L., Kuipers, T., Hosseini, M., & Paoletti, N. (2024). Verifiably Robust Conformal Prediction. Advances in Neural Information Processing Systems, 37.
This paper addresses the vulnerability of Conformal Prediction (CP) methods to adversarial attacks, which can break the coverage guarantee of the resulting prediction sets. The authors develop Verifiably Robust Conformal Prediction (VRCP), a framework that uses neural network verification techniques to produce prediction sets that remain valid, and still reasonably small, under adversarial perturbations of the inputs.
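To make the core mechanism concrete, below is a minimal sketch of how verified score bounds can be plugged into a standard split-conformal pipeline. The choice of non-conformity score, the `verified_score_lower_bound` hook, and all parameter names are illustrative assumptions, not the paper's actual implementation or any real verifier API; in practice the bound would come from a neural network verification tool.

```python
import numpy as np

def conformal_quantile(cal_scores, alpha):
    """Split-conformal quantile: the ceil((n+1)(1-alpha))-th smallest calibration score."""
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(cal_scores)[min(k, n) - 1]

def nonconformity(probs, y):
    """One common score choice (assumed here): 1 minus the predicted probability of label y."""
    return 1.0 - probs[y]

def robust_prediction_set(x, q_hat, num_classes, verified_score_lower_bound):
    """Include every label whose verified LOWER bound of the non-conformity score,
    taken over the epsilon-ball around the observed (possibly attacked) input x,
    is <= q_hat. The clean input lies in that ball, and its score is at least the
    lower bound, so any label the clean input would have included is included here
    too; the clean coverage guarantee therefore carries over to perturbed inputs.

    `verified_score_lower_bound(x, y)` is a hypothetical placeholder for a neural
    network verifier (e.g., a bound-propagation method), not a real library call.
    """
    return {y for y in range(num_classes)
            if verified_score_lower_bound(x, y) <= q_hat}
```

The design point this sketch tries to capture is that the verifier replaces a single score evaluation with a sound bound over the whole perturbation region, so no attack within that region can push the true label out of the set.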