The paper introduces SCARCE, a new method for complementary-label learning that addresses overfitting and achieves strong empirical performance. It frames complementary-label learning in terms of negative-unlabeled learning, and extensive experiments on synthetic and real-world datasets validate the method's effectiveness.
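As a rough illustration of that connection (a generic negative-unlabeled risk rewriting, not necessarily the paper's exact estimator): if each class is treated as a binary task in which complementarily labeled examples act as known negatives, the classification risk with negative-class prior $\pi_n = p(y=-1)$ can be written using only negative and unlabeled data:

$$
R(f) \;=\; \mathbb{E}_{p(x)}\!\left[\ell(f(x))\right] \;-\; \pi_n\,\mathbb{E}_{p(x\mid y=-1)}\!\left[\ell(f(x))\right] \;+\; \pi_n\,\mathbb{E}_{p(x\mid y=-1)}\!\left[\ell(-f(x))\right],
$$

where the first two terms recover the positive-class risk from unlabeled data. On finite samples this difference can dip below zero, which is one source of the overfitting behavior discussed in the paper.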
Existing approaches rely on the assumption that complementary labels are generated from a uniform or a known biased distribution, which may not hold in real-world scenarios. SCARCE does not require these generation assumptions, offering a more practical solution. The study also investigates how inaccurate class priors affect classification performance.
SCARCE outperforms state-of-the-art methods in most cases, demonstrating its robustness and effectiveness across settings. The theoretical analysis characterizes the estimator's convergence properties and its calibration to the 0-1 loss.
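To make the overfitting fix concrete, here is a minimal PyTorch-style sketch of a non-negative risk correction for one negative-unlabeled task; this shows the general technique (in the spirit of Kiryo et al.'s nnPU correction), not SCARCE's exact implementation, and the names `f_neg`, `f_unl`, `pi_n`, and `ell` are illustrative assumptions:

```python
import torch

def nu_corrected_risk(f_neg, f_unl, pi_n, ell):
    """Non-negative corrected risk for one negative-unlabeled (NU) binary task.

    f_neg: model outputs on examples known to be negative for this class
           (e.g. examples carrying this class as a complementary label)
    f_unl: model outputs on unlabeled examples
    pi_n:  negative-class prior p(y = -1), assumed known or estimated
    ell:   a margin loss, e.g. ell = lambda z: torch.sigmoid(-z)
    """
    # Negative-class risk, evaluated directly on the negative data.
    risk_neg = pi_n * ell(-f_neg).mean()
    # Positive-class risk rewritten as unlabeled-minus-negative data;
    # unbiased in expectation, but can turn negative on finite samples.
    risk_pos = ell(f_unl).mean() - pi_n * ell(f_neg).mean()
    # Non-negative correction: clamp the rewritten term at zero.
    return risk_neg + torch.clamp(risk_pos, min=0.0)
```

Clamping the rewritten positive-class risk at zero prevents the empirical risk from going unboundedly negative, which is the failure mode behind the overfitting observed with unbiased estimators of this form.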