This work introduces CPCA, a novel method for improving unsupervised domain adaptive semantic segmentation of high-resolution remote sensing imagery. It disentangles causal features from bias features, bridges domain-invariant causal features across domains, and intervenes on the bias features to improve prediction accuracy. Experimental results show superior performance compared to existing methods.
Semantic segmentation of high-resolution remote sensing imagery suffers from domain shift, which degrades model performance on unseen domains. Unsupervised domain adaptive (UDA) methods address this by adapting models trained on a labeled source domain to an unlabeled target domain. However, existing UDA models align pixels or features using only statistical associations with labels, which introduces uncertainty into their predictions. The proposed CPCA method instead seeks invariant causal mechanisms between the different domains and their semantic labels: it disentangles causal features from bias features, learns domain-invariant causal features, and generates counterfactual unbiased samples through intervention on the bias features.
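The disentangle-then-intervene idea above can be sketched in a minimal form. This is a hypothetical illustration, not the paper's actual architecture: the split point between causal and bias dimensions, and the intervention-by-permutation strategy, are assumptions made for clarity.

```python
import numpy as np

def disentangle(feat, causal_dim):
    """Split encoder features into causal and bias parts.

    In practice the split would be learned; a fixed index split is
    used here purely as a stand-in (hypothetical).
    """
    return feat[:, :causal_dim], feat[:, causal_dim:]

def counterfactual_intervene(causal, bias, rng):
    """Intervene on bias features to build counterfactual samples.

    Each sample keeps its own causal features but receives the bias
    features of a randomly chosen other sample, so the label-relevant
    (causal) content is preserved while the bias content is varied.
    """
    perm = rng.permutation(len(bias))
    return np.concatenate([causal, bias[perm]], axis=1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))          # 4 samples, 8-dim features
causal, bias = disentangle(feats, 5)     # assume 5 causal, 3 bias dims
cf = counterfactual_intervene(causal, bias, rng)
```

A segmentation model trained on both the original and counterfactual samples would then be encouraged to rely on the causal part alone, since the bias part no longer correlates consistently with the labels.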
Deep learning methods have shown excellent performance on semantic segmentation of high-resolution remote sensing imagery. However, the fixed receptive fields of convolutional operations limit their ability to model global contextual information and long-range dependencies. Transformer-based methods have shown promise in capturing global contextual relationships, improving feature representation and pattern recognition.
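The contrast between fixed receptive fields and global context can be made concrete with a minimal self-attention sketch (a generic illustration of the transformer mechanism, not code from the paper): every position's output is a weighted mixture of all positions, so the effective receptive field spans the whole input.

```python
import numpy as np

def self_attention(x):
    """Single-head self-attention over a sequence of feature vectors.

    Unlike a convolution, whose receptive field is bounded by kernel
    size, each output row here attends to every input row, giving a
    global receptive field in one layer. Projections (Q, K, V) are
    omitted for brevity.
    """
    scores = x @ x.T / np.sqrt(x.shape[1])          # pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over positions
    return weights @ x                              # mix all positions

x = np.random.default_rng(1).normal(size=(6, 4))    # 6 positions, 4-dim
y = self_attention(x)
```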
Existing UDA methods rely on feature alignment through adversarial or contrastive learning, but their predictions can be uncertain and vulnerable to interference from spurious, non-causal correlations. By modeling the underlying causal structure rather than statistical dependencies alone, the proposed CPCA method aims to improve semantic segmentation performance across different domains of high-resolution remote sensing imagery.