Key Concepts
Existing invariance learning methods struggle to extract stable features across continuously indexed domains due to the limited samples per domain. Continuous Invariance Learning (CIL) addresses this challenge by aligning the conditional distribution of domain indices given the extracted features across different classes, enabling effective extraction of invariant features.
Summary
The paper starts by identifying the limitations of existing invariance learning methods, such as Invariant Risk Minimization (IRM) and its variants, in handling continuously indexed domains. Theoretically, the authors show that when the number of domains is large and each domain contains only a few samples, existing methods such as REx can fail to identify invariant features with constant probability.
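To make the failure mode concrete, the toy simulation below (my own illustration, not the paper's construction) computes a REx-style penalty, i.e. the variance of empirical risks across domains, on synthetic continuously indexed domains. The noise scale, domain count, and sample sizes are arbitrary choices for the sketch: with only one sample per domain, each per-domain risk is a very noisy estimate, so the penalty is dominated by estimation error rather than by any genuine cross-domain signal.

```python
import random
import statistics

random.seed(0)

def rex_penalty(per_domain_risks):
    # REx-style penalty: variance of the empirical risks across domains.
    return statistics.pvariance(per_domain_risks)

def empirical_risks(n_per_domain, num_domains=500):
    # Continuously indexed domains t in [0, 1]. The predictor's residual
    # noise scale drifts smoothly with t, so the true risks differ across t.
    risks = []
    for i in range(num_domains):
        t = i / (num_domains - 1)
        sigma = 0.5 + t  # illustrative smooth drift of the noise level
        errs = [random.gauss(0.0, sigma) ** 2 for _ in range(n_per_domain)]
        risks.append(sum(errs) / n_per_domain)
    return risks

# One sample per domain: the penalty is inflated by estimation noise,
# compared with the same setup given many samples per domain.
penalty_few = rex_penalty(empirical_risks(n_per_domain=1))
penalty_many = rex_penalty(empirical_risks(n_per_domain=1000))
print(penalty_few, penalty_many)
```

With abundant samples per domain the penalty reflects only the true cross-domain variation; with one sample per domain it is swamped by noise, which is the regime where the theory predicts failure.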
To address this challenge, the authors propose Continuous Invariance Learning (CIL), a novel adversarial framework that extracts invariant features by aligning the conditional distribution of domain indices given the extracted features across different classes. Because CIL conditions on the features rather than on each individual domain, it avoids the inaccurate per-domain estimation of the conditional distribution of the label given the features that plagues existing methods in the continuous-domain setting.
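The sketch below illustrates the alignment criterion with a first-moment surrogate rather than the paper's full adversarial training: for each class, it fits a simple predictor of the continuous domain index t from a scalar feature, and penalizes the disagreement between the class-conditional predictors. All names and the synthetic data-generating process are my own illustrative assumptions. An invariant feature (independent of t within each class) yields agreeing predictors and a small penalty; a spurious feature that tracks t differently per class yields a large one.

```python
import random

random.seed(1)

def fit_line(xs, ys):
    # Ordinary least squares for y ≈ a * x + b, in closed form.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var = sum((x - mx) ** 2 for x in xs) / n
    a = cov / var if var > 0 else 0.0
    return a, my - a * mx

def cil_style_penalty(feats, t_idx, labels):
    # Surrogate for CIL's alignment target: compare the class-conditional
    # predictors of the domain index t given the feature. If t | feature
    # is distributed the same way in both classes, the fits agree.
    fits = []
    for cls in (0, 1):
        xs = [f for f, y in zip(feats, labels) if y == cls]
        ts = [t for t, y in zip(t_idx, labels) if y == cls]
        fits.append(fit_line(xs, ts))
    (a0, b0), (a1, b1) = fits
    return abs(a0 - a1) + abs(b0 - b1)

# Synthetic continuously indexed data: domain index t uniform in [0, 1].
n = 4000
t_idx = [random.random() for _ in range(n)]
labels = [random.randint(0, 1) for _ in range(n)]
# Invariant feature: driven by the label, independent of t.
inv = [y + random.gauss(0.0, 0.3) for y in labels]
# Spurious feature: tracks t, with a class-dependent offset.
spu = [t + 0.8 * y + random.gauss(0.0, 0.1) for t, y in zip(t_idx, labels)]

penalty_inv = cil_style_penalty(inv, t_idx, labels)
penalty_spu = cil_style_penalty(spu, t_idx, labels)
print(penalty_inv, penalty_spu)
```

In the actual method this discrepancy is measured adversarially by a learned discriminator and minimized through the feature extractor; the linear surrogate here only conveys why conditioning on features and comparing across classes separates invariant from spurious features.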
The authors provide a theoretical analysis demonstrating the advantages of CIL over existing IRM approximation methods in continuous domain tasks. They also conduct extensive experiments on both synthetic and real-world datasets, including an industrial application in Alipay and vision datasets from Wilds-time, showing that CIL consistently outperforms state-of-the-art baselines.