The authors address Long-Tailed Semi-Supervised Learning (LTSSL), where the labeled data exhibit an imbalanced class distribution and the unlabeled data follow an unknown distribution. Unlike in balanced SSL, the generated pseudo-labels are skewed towards head classes, intensifying the training bias. This phenomenon is further amplified when the class distributions of the labeled and unlabeled sets are mismatched, since even more unlabeled samples are then mislabeled as head classes.
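To make the bias mechanism concrete, below is a minimal sketch of confidence-thresholded pseudo-labeling in the FixMatch style that many LTSSL methods build on. The function name and threshold value are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def pseudo_label(model, unlabeled_x, threshold=0.95):
    """Sketch of confidence-thresholded pseudo-labeling (FixMatch-style)."""
    # Predict on unlabeled data and keep only confident predictions.
    probs = F.softmax(model(unlabeled_x), dim=-1)
    conf, labels = probs.max(dim=-1)
    mask = conf >= threshold
    # A model trained on long-tailed labels tends to be over-confident on
    # head classes, so the retained pseudo-labels skew towards the head
    # and the bias compounds as training proceeds.
    return unlabeled_x[mask], labels[mask]
```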
To solve this problem, the authors propose a novel method named ComPlementary Experts (CPE). Specifically, they train multiple experts, each modeling a different class distribution and yielding high-quality pseudo-labels for unlabeled data drawn from that distribution. In addition, they introduce Classwise Batch Normalization for CPE to avoid the performance degradation caused by the feature distribution mismatch between head and non-head classes.
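The multi-expert idea can be pictured as a shared backbone with several logit-adjusted classifier heads. The sketch below is a minimal PyTorch illustration under that assumption; the class name, the `taus` values, and the per-expert `BatchNorm1d` (a simplified stand-in for the paper's Classwise Batch Normalization) are hypothetical, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class ComplementaryExperts(nn.Module):
    def __init__(self, backbone, feat_dim, num_classes, class_prior,
                 taus=(0.0, 1.0, 2.0)):
        super().__init__()
        self.backbone = backbone
        # One BN + linear head per expert. Giving each expert its own
        # normalization statistics is a simplified stand-in for the paper's
        # Classwise Batch Normalization, which targets the feature
        # distribution mismatch between head and non-head classes.
        self.bns = nn.ModuleList(nn.BatchNorm1d(feat_dim) for _ in taus)
        self.heads = nn.ModuleList(nn.Linear(feat_dim, num_classes) for _ in taus)
        # Log class prior of the labeled set, used for logit adjustment.
        self.register_buffer("log_prior", torch.log(class_prior))
        self.taus = taus

    def forward(self, x):
        feat = self.backbone(x)
        # Expert k sees logits shifted by tau_k * log(prior), so each expert
        # effectively models a different class distribution: tau = 0 fits the
        # skewed labeled prior, while larger tau compensates towards flatter
        # or inverted distributions.
        return [head(bn(feat)) + tau * self.log_prior
                for bn, head, tau in zip(self.bns, self.heads, self.taus)]
```

At inference time one would typically select or ensemble the experts according to the assumed test-time class distribution; the exact training and ensembling procedure is specified in the paper.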
The authors evaluate CPE on the CIFAR-10-LT, CIFAR-100-LT, and STL-10-LT benchmarks. They show that CPE achieves state-of-the-art performance, improving test accuracy by over 2.22% compared to baselines on CIFAR-10-LT.
Key insights distilled from the paper by Chengcheng M... at arxiv.org, 04-04-2024: https://arxiv.org/pdf/2312.15702.pdf