Leveraging domain relations for out-of-domain generalization.
A2XP, a novel domain generalization method, preserves the privacy of the objective network architecture while achieving state-of-the-art performance by disentangling the problem into expert adaptation and attention-based generalization.
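The attention-based generalization step can be pictured as scoring per-domain expert prompts against the current input and blending them by attention weights. A minimal NumPy sketch of that idea, assuming hypothetical names (`mix_expert_prompts`, `expert_keys`), not A2XP's actual API:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def mix_expert_prompts(query, expert_keys, expert_prompts):
    # Score each domain expert against the input embedding, then return
    # the attention-weighted mixture of the experts' prompts.
    scores = expert_keys @ query          # one relevance score per expert
    weights = softmax(scores)             # weights sum to 1
    return weights @ expert_prompts       # blended prompt vector
```

With a query that strongly matches one expert, the mixture collapses toward that expert's prompt; for ambiguous inputs it interpolates between experts.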
A novel prompt learning strategy that leverages deep vision prompts for domain invariance and language prompts for class separability, with an adaptive weighting mechanism to balance the two objectives.
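One common way to balance two competing objectives adaptively is uncertainty-style weighting, where each loss is scaled by a learned precision term. This is a hedged stand-in for the paper's mechanism, not its actual formulation; the function and argument names are illustrative:

```python
import numpy as np

def adaptive_weighted_loss(loss_inv, loss_sep, log_var_inv, log_var_sep):
    # Each loss is downweighted by its learned log-variance (a higher
    # log-variance means the objective is trusted less), with the
    # log-variance itself added as a regularizer to prevent collapse.
    return (np.exp(-log_var_inv) * loss_inv + log_var_inv
            + np.exp(-log_var_sep) * loss_sep + log_var_sep)
```

During training the two log-variances would be optimized jointly with the prompts, letting the model shift emphasis between domain invariance and class separability.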
Quantization-aware training (QAT), typically used for model compression, can surprisingly enhance domain generalization in deep learning by guiding the optimization process towards flatter minima in the loss landscape, making models less susceptible to overfitting and more robust to unseen data distributions.
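The core mechanism in QAT is "fake quantization": weights are rounded to a coarse uniform grid in the forward pass (gradients typically flow through via a straight-through estimator), which restricts the model to solutions that survive discretization and thereby favors flatter minima. A minimal sketch of the rounding step (simplified symmetric per-tensor quantization, not any specific framework's API):

```python
import numpy as np

def fake_quantize(w, num_bits=8):
    # Simulate low-precision weights: snap each value to a uniform grid
    # spanning [-max|w|, max|w|], but keep the result in float so the
    # rest of the network runs unchanged.
    qmax = 2 ** (num_bits - 1) - 1
    w_absmax = np.max(np.abs(w))
    scale = w_absmax / qmax if w_absmax > 0 else 1.0
    return np.round(w / scale) * scale
```

A solution that remains accurate after this rounding must sit in a region of the loss landscape that tolerates small weight perturbations, which is exactly the flat-minima intuition behind the generalization benefit.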
Integrating causal principles and Bayesian neural networks can improve the robustness of image recognition models against distribution shifts, outperforming traditional methods by disentangling domain-invariant features and mitigating overfitting.
This paper proposes a novel approach that combines causal inference with Bayesian neural networks to improve the domain generalization ability of deep learning models.
The strong performance of CLIP models trained on large-scale web datasets stems from the broad range of image domains covered by the training data; this suggests the models rely on training-data diversity rather than possessing genuine OOD generalization ability.
START, a novel state space model architecture, enhances domain generalization by using saliency-driven token-aware transformation to mitigate the accumulation of domain-specific features in input-dependent matrices.
This paper provides a theoretical analysis showing that domain-specific features within input-dependent matrices can hinder model generalization, and proposes Saliency-Driven Token-Aware Transformation (START) to address this, improving domain generalization performance.
This research paper introduces a novel approach to address the issue of frequency shortcut learning in domain generalization by dynamically manipulating the frequency characteristics of training data using adversarial augmentation techniques.
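Frequency-side augmentation is usually implemented by perturbing an image's Fourier amplitude spectrum while preserving its phase (phase carries most semantic structure). The sketch below shows random amplitude scaling as a minimal illustration of manipulating frequency characteristics; the paper's method is adversarial rather than random, and the function name is hypothetical:

```python
import numpy as np

def perturb_amplitude(img, strength=0.5, rng=None):
    # Scale the Fourier amplitude spectrum with random multiplicative
    # noise while keeping the phase, altering the image's frequency
    # profile without destroying its spatial layout.
    if rng is None:
        rng = np.random.default_rng()
    spec = np.fft.fft2(img)
    amp, phase = np.abs(spec), np.angle(spec)
    noise = 1.0 + strength * rng.uniform(-1.0, 1.0, size=amp.shape)
    return np.real(np.fft.ifft2(amp * noise * np.exp(1j * phase)))
```

An adversarial variant would choose the amplitude perturbation to maximize the training loss instead of sampling it, discouraging the model from latching onto narrow frequency-band shortcuts.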