Core Concepts
Distribution-aware tuning (DAT) is a parameter-efficient, model-based method that adaptively selects and updates two small groups of trainable parameters to extract target-domain-specific and task-relevant knowledge, effectively addressing error accumulation and catastrophic forgetting during continual adaptation.
Abstract
The paper proposes a distribution-aware tuning (DAT) method for efficient and stable continual test-time adaptation (CTTA) in semantic segmentation tasks.
The key highlights are:
DAT adaptively selects two small groups of trainable parameters (around 5% of the model) based on the degree of pixel-level distribution shift in the target domain; a hedged code sketch follows this list:
Domain-specific parameters (DSP) are fine-tuned to capture domain-specific knowledge and mitigate error accumulation.
Task-relevant parameters (TRP) are fine-tuned to avoid catastrophic forgetting.
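To make the selection mechanism concrete, here is a minimal PyTorch sketch of shift-aware parameter selection. The scoring rule (gradient magnitude of losses computed on high-shift versus low-shift pixels) and all names (select_param_groups, loss_high_shift, loss_low_shift, ratio) are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch: pick DSP/TRP by gradient magnitude. The scoring rule and
# all names are illustrative assumptions, not the paper's exact method.
import torch


def select_param_groups(model, loss_high_shift, loss_low_shift, ratio=0.05):
    """Select two small parameter groups (~ratio of all tensors):
    DSP from gradients of a loss on high-shift pixels (domain-specific),
    TRP from gradients of a loss on low-shift pixels (task-relevant)."""
    params = [p for p in model.parameters() if p.requires_grad]

    def top_indices(loss):
        grads = torch.autograd.grad(loss, params,
                                    retain_graph=True, allow_unused=True)
        scores = [g.abs().mean().item() if g is not None else 0.0
                  for g in grads]
        k = max(1, int(ratio * len(params)))
        return set(sorted(range(len(params)),
                          key=lambda i: scores[i], reverse=True)[:k])

    dsp = top_indices(loss_high_shift)  # mitigates error accumulation
    trp = top_indices(loss_low_shift)   # guards against forgetting
    return dsp, trp
```

In practice the two losses would come from the same forward pass, masked by some pixel-level shift estimate (for instance, teacher-student prediction disagreement); only the union of DSP and TRP would then be handed to the optimizer.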
The Parameter Accumulation Update (PAU) strategy efficiently collects the DSP and TRP during the continual adaptation process: for each target-domain sample, only a very small fraction of parameters (e.g., 0.1%) is selected and added to the parameter group, until the distribution shift becomes relatively small. A hedged sketch of this accumulation loop appears below.
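The following sketch shows how PAU might be implemented: per sample, a tiny fraction of not-yet-selected parameters is scored (here by gradient magnitude, an assumption) and added to the accumulated group, and gradients outside the group are masked so only accumulated parameters receive optimizer updates. The shift estimate and threshold (shift_score, shift_threshold) are placeholders, not the paper's criterion.

```python
# Hedged sketch of Parameter Accumulation Update (PAU). The gradient-based
# scoring and the shift threshold are assumptions for illustration only.
import torch


def pau_update(model, loss, group, per_sample_ratio=0.001,
               shift_score=1.0, shift_threshold=0.1):
    """Grow `group` (a set of parameter indices) by ~per_sample_ratio of the
    model's parameters while the estimated shift remains large, then mask
    gradients so only accumulated parameters get optimizer updates."""
    params = list(model.parameters())
    loss.backward()  # populate p.grad for both scoring and the update step
    if shift_score > shift_threshold:
        scores = [(p.grad.abs().mean().item() if p.grad is not None else 0.0, i)
                  for i, p in enumerate(params) if i not in group]
        k = max(1, int(per_sample_ratio * len(params)))
        group |= {i for _, i in sorted(scores, reverse=True)[:k]}
    for i, p in enumerate(params):
        if i not in group and p.grad is not None:
            p.grad.zero_()  # freeze everything outside the accumulated group
    return group
```

Called once per target sample before optimizer.step(), this keeps the trainable set growing only while the shift remains large, matching the accumulation behavior described above.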
Extensive experiments on two CTTA benchmarks, Cityscapes-to-ACDC and SHIFT, demonstrate that DAT achieves competitive performance and efficiency compared with previous state-of-the-art methods, showing its effectiveness on the semantic segmentation CTTA problem.
Stats
Beyond the parameter budgets quoted above (roughly 5% of parameters made trainable overall, with about 0.1% selected per target sample), this summary does not reproduce the paper's numerical results; the focus is on the proposed method and its evaluation on benchmark datasets.
Quotes
There are no direct quotes from the paper included in the summary.