Core Concepts
This article introduces PLCP, an appeal-based partial label learning framework that lets mislabeled samples appeal and be rectified, thereby enhancing the disambiguation ability of existing partial label learning approaches.
Abstract
The article introduces the concept of "appeal" in partial label learning (PLL), where each instance is associated with a set of candidate labels, among which only one is the ground truth. Existing PLL methods primarily focus on constructing robust classifiers that estimate the labeling confidence of candidate labels in order to identify the correct one. However, these methods often struggle to identify and rectify mislabeled samples.
To address this issue, the authors propose the first appeal-based PLL framework, PLCP (Partial Label Learning with a Classifier as Partner). PLCP integrates an additional partner classifier that helps the base classifier identify and rectify mislabeled samples by providing more precise, complementary information. The partner classifier is designed to specify the labels that should not be assigned to a sample, since non-candidate label information is typically more precise yet often overlooked by existing PLL methods.
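To make the non-candidate idea concrete, the following is a minimal sketch (not the paper's exact loss) of how a partner classifier can be penalized for placing probability mass on labels that are certainly wrong. The function name `non_candidate_loss` and the mask convention are assumptions for illustration.

```python
import numpy as np

def non_candidate_loss(probs, candidate_mask):
    """Penalize probability mass placed on non-candidate labels.

    probs:          (n, k) softmax outputs of the partner classifier
    candidate_mask: (n, k) entries are 1 for candidate labels, 0 otherwise
    """
    # Probability assigned to labels that are certainly NOT the ground truth.
    leaked = (probs * (1 - candidate_mask)).sum(axis=1)
    # Negative log of the mass kept on candidates; near zero when nothing leaks.
    return float(-np.log(1.0 - leaked + 1e-12).mean())
```

A classifier that concentrates all mass on the candidate set incurs (near) zero loss, while any mass on a non-candidate label is penalized, which is what makes this signal precise: non-candidate labels are known with certainty.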
During mutual supervision, the labeling confidence is first updated from the base classifier's output, and a blurring mechanism is then applied to introduce uncertainty. This updated labeling confidence serves as the supervision signal for the partner classifier, whose final output, in turn, supervises the base classifier. The predictions of the two classifiers, while distinct, are inextricably linked, enhancing the disambiguation ability of this paradigm from two opposing directions: which labels to assign and which to rule out. Under this mutual supervision paradigm, instances with disambiguation errors have a higher likelihood of appealing successfully.
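The confidence update and blurring step can be sketched as follows. This is a hypothetical form, assuming the blur mixes the confidence with a uniform distribution over the candidate set; the parameter name `blur` and the exact mixing rule are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def update_confidence(base_probs, candidate_mask, blur=0.1):
    """One round of the labeling-confidence update (illustrative sketch).

    1) Restrict the base classifier's output to candidate labels, renormalize.
    2) Blur: mix with a uniform distribution over the candidates to retain
       uncertainty, leaving mislabeled samples room to appeal later.
    The result then supervises the partner classifier.
    """
    masked = base_probs * candidate_mask
    conf = masked / masked.sum(axis=1, keepdims=True)
    uniform = candidate_mask / candidate_mask.sum(axis=1, keepdims=True)
    return (1 - blur) * conf + blur * uniform
```

Note that non-candidate labels keep exactly zero confidence, so the blurring redistributes mass only among plausible labels.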
Extensive experiments on various real-world datasets and deep learning benchmarks demonstrate that the appeal and disambiguation ability of several well-established stand-alone and deep-learning based PLL approaches can be significantly improved by coupling with the PLCP framework.
Stats
Each candidate label's labeling confidence tends to increase or decrease monotonically until convergence.
For a false positive candidate label with a large labeling confidence, even if its confidence decreases appropriately, it may still exceed that of the ground-truth label.
The labeling confidence of a false positive candidate label keeps increasing and becomes the largest, which misleads the final prediction.
Quotes
"Once the labeling confidence of a false positive candidate label increases, it becomes difficult to decrease in the subsequent iterations."
"Even if the confidence of a false positive candidate label decreases appropriately, it may still be recognized as the ground truth one, as its initial labeling confidence remains large and continues to be greater than the confidence of the ground truth label upon convergence."