The article introduces a comprehensive framework for assessing uncertainty calibration in object detection systems, highlighting the importance of reliable models in safety-critical applications such as autonomous driving and robotics. The work proposes new evaluation metrics based on semantic uncertainty, centered on IoU threshold-based evaluations. Experimental results show a consistent relation between mAP performance and the uncertainty calibration metrics. Sensitivity tests indicate that the proposed metrics behave robustly as the proportions of different detection types vary. Distribution-shift experiments demonstrate the impact of shifted test data on uncertainty calibration metrics, with test-time augmentation (TTA) showing promising results under such shifts.
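To make the idea of IoU threshold-based calibration evaluation concrete, here is a minimal sketch (not the paper's exact metric) of a detection-level expected calibration error: each detection is labeled correct or incorrect by whether its IoU with a matching ground-truth box exceeds a threshold, and calibration error is the confidence-weighted gap between per-bin accuracy and mean confidence. All function names and the binning scheme are illustrative assumptions.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes in [x1, y1, x2, y2] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def detection_ece(confidences, correct, n_bins=10):
    """Illustrative detection-level ECE: bin detections by confidence,
    then take the weighted mean |bin accuracy - bin confidence|.
    `correct` holds 1/0 flags, e.g. from an IoU-threshold match."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if idx:
            acc = sum(correct[i] for i in idx) / len(idx)
            conf = sum(confidences[i] for i in idx) / len(idx)
            ece += (len(idx) / n) * abs(acc - conf)
    return ece
```

Under this sketch, a detector whose confidences match its empirical accuracy in every bin scores an ECE of zero; overconfident detectors (high confidence, low IoU-matched accuracy) score higher.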