This paper introduces Learning to Bootstrap (L2B), a new method for learning with noisy labels. The key idea is to use meta-learning to dynamically adjust, during training, the relative weights given to the observed labels and to the model's own predictions (pseudo-labels), as well as the weights of individual training samples.
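A minimal PyTorch sketch of this bilevel update may make the mechanics concrete. It follows the standard meta-reweighting recipe (a virtual SGD step, a validation loss, and a gradient through to the weights); the function name `l2b_step`, the hard pseudo-labels, and the single-virtual-step simplification are assumptions made for illustration, not the paper's exact implementation:

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def l2b_step(model, x_train, y_train, x_val, y_val, inner_lr=0.1):
    """One meta-iteration: learn per-sample weights on the observed-label
    and pseudo-label loss terms by differentiating a virtual SGD step,
    then return the reweighted training loss for the real optimizer."""
    params = dict(model.named_parameters())

    logits = functional_call(model, params, (x_train,))
    pseudo = logits.detach().argmax(dim=1)  # hard pseudo-labels from the current model

    n = x_train.size(0)
    # Per-sample weights: alpha on the observed label, beta on the pseudo-label.
    # Initialized at zero so the meta-gradient measures each term's marginal
    # effect on validation loss (the learning-to-reweight trick).
    alpha = torch.zeros(n, device=x_train.device, requires_grad=True)
    beta = torch.zeros(n, device=x_train.device, requires_grad=True)

    loss_real = F.cross_entropy(logits, y_train, reduction="none")
    loss_pseudo = F.cross_entropy(logits, pseudo, reduction="none")
    inner_loss = (alpha * loss_real + beta * loss_pseudo).mean()

    # Virtual SGD step; create_graph=True keeps the dependence on alpha/beta.
    grads = torch.autograd.grad(inner_loss, list(params.values()),
                                create_graph=True, allow_unused=True)
    updated = {k: (v if g is None else v - inner_lr * g)
               for (k, v), g in zip(params.items(), grads)}

    # Outer objective: loss of the virtually-updated model on clean validation data.
    val_logits = functional_call(model, updated, (x_val,))
    val_loss = F.cross_entropy(val_logits, y_val)
    g_alpha, g_beta = torch.autograd.grad(val_loss, (alpha, beta))

    # Keep only weights whose increase would reduce validation loss; normalize.
    w_alpha = torch.clamp(-g_alpha, min=0.0)
    w_beta = torch.clamp(-g_beta, min=0.0)
    norm = (w_alpha + w_beta).sum().clamp(min=1e-8)
    w_alpha, w_beta = w_alpha / norm, w_beta / norm

    # Fresh forward pass so the returned loss backpropagates to the model.
    logits = model(x_train)
    loss_real = F.cross_entropy(logits, y_train, reduction="none")
    loss_pseudo = F.cross_entropy(logits, pseudo, reduction="none")
    return (w_alpha * loss_real + w_beta * loss_pseudo).sum()
```

A full training loop would call `l2b_step` on each batch and then run `loss.backward()` and `optimizer.step()` as usual.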
The paper first discusses the limitations of existing approaches such as the bootstrapping loss, which uses a fixed weighted combination of the observed labels and pseudo-labels. L2B instead introduces a more flexible loss function in which the weights on the observed and pseudo-labels, along with the per-sample weights, are dynamically adjusted based on performance on a clean validation set.
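In symbols (the notation here is a paraphrase: $f(x_i)$ is the model's prediction for sample $x_i$, $y_i$ its observed label, and $\hat{y}_i$ its pseudo-label), the classic bootstrapping loss fixes one global coefficient $\beta$:

$$\mathcal{L}_{\text{boot}} = \frac{1}{n}\sum_{i=1}^{n} \ell\big(f(x_i),\; \beta\, y_i + (1-\beta)\,\hat{y}_i\big),$$

whereas L2B gives every sample its own non-negative pair $(\alpha_i, \beta_i)$:

$$\mathcal{L}_{\text{L2B}} = \frac{1}{n}\sum_{i=1}^{n} \Big[\alpha_i\, \ell\big(f(x_i), y_i\big) + \beta_i\, \ell\big(f(x_i), \hat{y}_i\big)\Big],$$

with the $(\alpha_i, \beta_i)$ chosen, via the bilevel meta-objective, to minimize the updated model's loss on the clean validation set.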
The authors show that this formulation reduces to the original bootstrapping loss with dynamically learned per-sample coefficients, effectively performing implicit relabeling of the training data. Through meta-learning, L2B significantly outperforms baseline methods, especially under high noise levels, without incurring additional computational cost.
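The reduction is a one-line identity for losses that are linear in the target, such as cross-entropy $\ell(p, q) = -\sum_c q_c \log p_c$. In the notation above,

$$\alpha_i\, \ell\big(f(x_i), y_i\big) + \beta_i\, \ell\big(f(x_i), \hat{y}_i\big) = w_i\, \ell\big(f(x_i),\; \lambda_i\, y_i + (1-\lambda_i)\,\hat{y}_i\big), \qquad w_i = \alpha_i + \beta_i,\quad \lambda_i = \tfrac{\alpha_i}{\alpha_i + \beta_i},$$

so each sample is simultaneously reweighted (by $w_i$) and implicitly relabeled (its target moved toward the pseudo-label by $1 - \lambda_i$), which is exactly the bootstrapping form with per-sample, dynamically learned coefficients.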
The paper also demonstrates that L2B can be effectively integrated with existing noisy-label learning techniques such as DivideMix, UNICON, and C2D, further boosting their performance. Experiments are conducted on natural and medical image datasets, including CIFAR-10, CIFAR-100, Clothing1M, and ISIC2019, covering different types of label noise and recognition tasks. The results highlight L2B's superior performance and robustness compared with contemporary label-correction and meta-learning approaches.