The paper addresses out-of-distribution (OOD) generalization, where deep learning models typically suffer performance degradation when tested on target domains whose distributions differ from the training data. The authors highlight that both domain-related features and class-shared features act as confounders that can mislead the model's predictions.
To address this issue, the authors propose the DICS (Domain-Invariant and Class-Specific) model, which consists of two key components:
Domain Invariance Testing (DIT): DIT learns the domain-specific features of each source domain and removes them, leaving domain-invariant, class-related features. It then computes the similarity of same-class features across different domains to assess and strengthen domain invariance (see the first sketch after this list).
Class Specificity Testing (CST): CST compares the input features with historical knowledge stored in an invariant memory queue to discern class differences. It minimizes the cross-entropy between the soft labels derived from the resulting similarity matrix and the true labels, which increases intra-class similarity and inter-class distinctiveness, thereby reinforcing class specificity (see the second sketch below).
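The summary contains no code, so the following is a minimal PyTorch sketch of a DIT-style objective: strip an estimated domain-specific component from each feature, then reward high similarity between same-class features drawn from different domains. The names `dit_loss` and `domain_protos` are hypothetical, and modeling domain-specific knowledge as one learned vector per source domain is an assumption for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dit_loss(features, labels, domains, domain_protos):
    # features:      (N, D) backbone features for a mini-batch
    # labels:        (N,)   class labels
    # domains:       (N,)   source-domain indices
    # domain_protos: (K, D) learned domain-specific vectors, one per source domain
    #                (assumed representation of domain-specific knowledge)

    # Remove the domain-specific component to approximate invariant features.
    invariant = F.normalize(features - domain_protos[domains], dim=1)

    # Pairwise cosine similarities between all invariant features.
    sim = invariant @ invariant.t()

    # Only compare same-class pairs drawn from *different* domains.
    same_class = labels.unsqueeze(0) == labels.unsqueeze(1)
    cross_domain = domains.unsqueeze(0) != domains.unsqueeze(1)
    mask = same_class & cross_domain
    if not mask.any():  # a batch may lack cross-domain same-class pairs
        return features.new_zeros(())

    # Maximizing these similarities (minimizing the negative mean)
    # pushes same-class features from different domains together.
    return -sim[mask].mean()
```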
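Likewise, a hedged sketch of the CST step under stated assumptions: a FIFO memory queue holds past invariant features with their labels, the current features are compared against the queue to form a similarity matrix, per-class pooling of that matrix yields soft class logits, and these are trained against the true labels with cross-entropy. `InvariantMemoryQueue`, `cst_loss`, and the `temperature` parameter are illustrative names, not the paper's API.

```python
import torch
import torch.nn.functional as F

class InvariantMemoryQueue:
    """Hypothetical FIFO queue of past invariant features and their labels."""

    def __init__(self, dim, size, num_classes):
        self.feats = F.normalize(torch.randn(size, dim), dim=1)
        self.labels = torch.randint(num_classes, (size,))
        self.num_classes = num_classes
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, feats, labels):
        # Overwrite the oldest entries, wrapping around the buffer.
        n = feats.size(0)
        idx = (self.ptr + torch.arange(n)) % self.feats.size(0)
        self.feats[idx] = F.normalize(feats, dim=1)
        self.labels[idx] = labels
        self.ptr = (self.ptr + n) % self.feats.size(0)

def cst_loss(features, labels, queue, temperature=0.1):
    # Similarity matrix between current features and queued history: (N, Q).
    feats = F.normalize(features, dim=1)
    sim = feats @ queue.feats.t() / temperature

    # Pool similarities per class to derive soft class logits: (N, C).
    one_hot = F.one_hot(queue.labels, queue.num_classes).float()  # (Q, C)
    counts = one_hot.sum(0).clamp(min=1)
    class_logits = sim @ one_hot / counts

    # Cross-entropy against the true labels sharpens class specificity.
    return F.cross_entropy(class_logits, labels)
```

Averaging similarities per class is one plausible way to turn the similarity matrix into soft labels; the paper may pool differently.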
The authors evaluate DICS on multiple datasets, including PACS, OfficeHome, TerraIncognita, and DomainNet, and show that it outperforms state-of-the-art methods in accuracy. Visualizations further show that DICS identifies the key features of each class in target domains, which are crucial for accurate classification.
Source: Qiaowei Miao et al., arxiv.org, September 16, 2024. https://arxiv.org/pdf/2409.08557.pdf