The article introduces a comprehensive framework for assessing uncertainty calibration in object detection systems, highlighting the importance of reliable models in safety-critical applications such as autonomous driving and robotics. The work proposes new evaluation metrics based on semantic uncertainty, built around IoU threshold-based matching of detections to ground truth. Experimental results show a consistent relation between mAP performance and the uncertainty calibration metrics. Sensitivity tests show that the proposed metrics behave robustly as the proportions of different types of detections vary. Distribution-shift experiments demonstrate the impact of shifted test data on the calibration metrics, with test-time augmentation (TTA) showing promising results under shift.
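To make the idea of IoU threshold-based calibration evaluation concrete, here is a minimal sketch of one common style of such a metric: a binned, ECE-style gap between a detector's confidence and its empirical precision, where a detection counts as correct when its IoU with some ground-truth box meets a threshold. The function names, the equal-width binning, and the matching rule are illustrative assumptions, not the paper's exact definitions.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def detection_ece(detections, gt_boxes, iou_thr=0.5, n_bins=10):
    """Hypothetical ECE-style calibration gap for detections.

    detections: list of (box, confidence); gt_boxes: list of boxes.
    A detection is "correct" if it overlaps any ground-truth box
    with IoU >= iou_thr (a simplification: no one-to-one matching).
    """
    bins = [[] for _ in range(n_bins)]
    for box, conf in detections:
        correct = any(iou(box, g) >= iou_thr for g in gt_boxes)
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, correct))
    total = len(detections)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        precision = sum(1 for _, ok in b if ok) / len(b)
        # Weighted gap between stated confidence and observed precision.
        ece += (len(b) / total) * abs(avg_conf - precision)
    return ece
```

A perfectly calibrated detector (confident detections that hit, unconfident ones that miss) scores near 0, while a confidently wrong detector scores near 1; the IoU threshold plays the same gating role it does in mAP evaluation.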
Key insights drawn from arxiv.org, by Pedro Conde,... at arxiv.org, 03-19-2024
https://arxiv.org/pdf/2309.00464.pdf