The article introduces a comprehensive framework for assessing uncertainty calibration in object detection systems, highlighting the importance of reliable models in safety-critical applications such as autonomous driving and robotics. The work proposes new evaluation metrics based on semantic uncertainty, built on IoU-threshold-based matching of detections to ground truth. Experimental results show a consistent relationship between mAP performance and the proposed uncertainty calibration metrics. Sensitivity tests show that the metrics behave robustly as the proportions of different types of detections vary. Distribution-shift experiments demonstrate how shifted test data affects uncertainty calibration, with test-time augmentation (TTA) showing promising results under shift.
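To make the IoU-threshold-based evaluation concrete, here is a minimal sketch of an expected-calibration-error-style metric for detections. This is an illustrative assumption, not the paper's exact metric: a detection is treated as correct when its IoU with the matched ground-truth box meets the threshold, and detections are binned by confidence.

```python
import numpy as np

def detection_ece(confidences, ious, iou_thresh=0.5, n_bins=10):
    """ECE-style calibration sketch for object detections (illustrative,
    not the paper's exact formulation).

    A detection counts as correct when its IoU with the matched
    ground-truth box reaches `iou_thresh`. Detections are binned by
    confidence, and the metric is the weighted average gap between
    each bin's mean confidence and its fraction of correct detections.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(ious, dtype=float) >= iou_thresh
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight bin by its share of detections
    return ece

# Toy example: two confident true positives, one low-confidence false positive.
print(detection_ece([0.9, 0.9, 0.2], [0.8, 0.7, 0.1]))
```

A perfectly calibrated detector would place, e.g., 90%-confidence detections in a bin where 90% of them pass the IoU threshold, driving the metric toward zero.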
Key Insights Distilled From
by Pedro Conde,... on arxiv.org, 03-19-2024
https://arxiv.org/pdf/2309.00464.pdf