Core Concepts
Training models on multiple de-aggregated labels, assigned to samples with low perceptual quality, improves reliability and generalizability without requiring massive additional annotations.
Abstract
Annotator label uncertainty degrades model reliability, manifesting as reduced generalizability and poorer prediction uncertainty. Existing algorithms do not fully mitigate this uncertainty. A novel framework uses perceptual quality to generate multiple labels for training, improving model performance without massive additional annotations: it selects samples with low perceptual quality scores, objectively assigns them de-aggregated labels, and trains on the result. Experiments show that this alleviates the degradation of both generalizability and prediction uncertainty. The study also highlights that annotator disagreement arises across fields such as image classification, medical diagnosis, and seismic interpretation.
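The core idea, selecting low-quality samples and training them against multiple de-aggregated labels instead of a single hard label, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the quality threshold, and the way candidate labels are turned into a soft target are all assumptions for the example.

```python
import numpy as np

def deaggregate_labels(y, quality, candidate_labels, threshold=0.5, n_classes=2):
    """Illustrative sketch of the framework's labeling step (names and
    details are assumptions, not the paper's API).

    For samples whose perceptual quality score falls below `threshold`,
    build a soft training target from multiple de-aggregated candidate
    labels; high-quality samples keep their single label as a one-hot
    target.
    """
    targets = []
    for yi, q, cands in zip(y, quality, candidate_labels):
        if q < threshold:
            # Low perceptual quality: spread label mass over the
            # de-aggregated labels instead of trusting one annotation.
            t = np.bincount(cands, minlength=n_classes) / len(cands)
        else:
            # High perceptual quality: keep the single hard label.
            t = np.eye(n_classes)[yi]
        targets.append(t)
    return np.stack(targets)

# Example: sample 0 has low quality (three candidate labels),
# sample 1 has high quality (its single label is kept).
T = deaggregate_labels(
    y=[0, 1],
    quality=[0.2, 0.9],
    candidate_labels=[[0, 1, 0], [1]],
)
```

The resulting soft targets can then be used with any loss that accepts class-probability targets (e.g. cross-entropy against a distribution), so the model is trained on the disagreement itself rather than on a possibly unreliable single annotation.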
Stats
Pages: 965–969
DOI: 10.1190/image2023-3916384.1
arXiv: 2403.10190v1 [cs.CV], 15 Mar 2024
Quotes
"Annotators exhibit disagreement during data labeling, which can be termed as annotator label uncertainty."
"Training with a single low-quality annotation per sample induces model reliability degradations."
"Our experiments demonstrate that training with the proposed framework alleviates the degradation of generalizability and prediction uncertainty caused by annotator label uncertainty."