Core Concepts
Incorporating self-supervised learning (SSL) into Universal Domain Adaptation significantly improves performance by mitigating the feature extractor's bias toward source-private classes, especially in extreme cases where such classes are numerous.
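As a rough sketch of the idea (not the paper's actual implementation), SSL can be attached as an auxiliary objective on unlabeled target data alongside the supervised source loss, so the feature extractor is trained on target structure rather than only on source-private classes. The SimSiam-style negative-cosine consistency loss and the 0.1 weight below are illustrative assumptions:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Row-wise L2 normalization with a small epsilon for stability."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def ssl_consistency_loss(feats_view1, feats_view2):
    """Self-supervised consistency term: negative mean cosine similarity
    between features of two augmented views of the same target images."""
    z1 = l2_normalize(feats_view1)
    z2 = l2_normalize(feats_view2)
    return -np.mean(np.sum(z1 * z2, axis=1))

def total_loss(src_cls_loss, target_feats_v1, target_feats_v2, ssl_weight=0.1):
    """Supervised loss on labeled source data plus a weighted SSL term
    on unlabeled target data (weight is a hypothetical hyperparameter)."""
    return src_cls_loss + ssl_weight * ssl_consistency_loss(
        target_feats_v1, target_feats_v2
    )
```

Because the SSL term depends only on target images and their augmentations, it pulls the representation toward the target distribution regardless of how many source-private classes dominate the supervised loss.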
Stats
Prior methods perform worse than source-only training when source-private classes significantly outnumber common classes.
In low SPCR settings, partial domain alignment performance only begins to decline once the noise rate exceeds 0.35.
Different partial domain alignment methods have noise rates of around 0.25-0.3 in low SPCR settings.
In high SPCR settings (SPCR = 5), the tolerance noise rate drops to 0.2.
The average noise rate in existing partial domain alignment methods is much higher than the tolerance noise rate in high SPCR settings.
Applying SSL only to target common-class data causes a relatively minor performance decline compared to applying SSL to the entire target dataset.
Training with SSL significantly reduces the noise rate in partial domain alignment.
UAN+SSL outperforms CMU by 10.1% on Office, 7.5% on DomainNet, and 17.8% on Office-Home, while surpassing DANCE by 20.4% on VisDA.
UniOT+SSL shows gains over UniOT of 3.5% on Office, 2.8% on Office-Home, 11.2% on VisDA, and 1.7% on DomainNet.
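The noise rate used in the stats above can be made concrete with a small sketch. Assuming (this definition is an inference, not stated verbatim in the source) that the noise rate is the fraction of target samples drawn into partial domain alignment whose true class is not among the shared common classes, it can be computed as:

```python
def partial_alignment_noise_rate(aligned_target_labels, common_classes):
    """Fraction of target samples selected for partial domain alignment
    whose ground-truth class is private (not shared with the source).
    `aligned_target_labels` and `common_classes` are hypothetical names."""
    common = set(common_classes)
    noisy = sum(1 for y in aligned_target_labels if y not in common)
    return noisy / len(aligned_target_labels)
```

Under this reading, a method stays safe in low SPCR settings while its noise rate remains below the 0.35 tolerance, but the same method can cross the tighter 0.2 tolerance in high SPCR settings.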