PCoTTA is a novel framework designed to address the challenge of adapting pre-trained point cloud models to continually changing target domains in a multi-task setting, effectively mitigating catastrophic forgetting and error accumulation.
This paper introduces MDAA, a novel method for Multi-Modal Continual Test-Time Adaptation (MM-CTTA) that effectively addresses challenges like error accumulation, catastrophic forgetting, and reliability bias in dynamically changing target domains with multi-modal corruption.
Current continual test-time adaptation (TTA) methods, primarily evaluated on artificial datasets, struggle in real-world scenarios with natural domain shifts, often performing worse than a frozen source model.
This paper proposes a cascading paradigm that synchronously updates the feature extractor and main classifier at test time, mitigating the mismatch between them and enabling long-term model adaptation. Pre-training is organized in a meta-learning framework to minimize interference between the main and self-supervised tasks and to encourage fast adaptation with limited unlabeled data.
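The synchronous test-time update can be illustrated with a minimal sketch. This is not the paper's actual method: it assumes a toy linear feature extractor `W_f` and classifier `W_c`, uses generic entropy minimization as the unsupervised test-time objective, and computes gradients numerically to stay dependency-free (a real implementation would use an autograd framework such as PyTorch).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy_loss(W_f, W_c, X):
    """Mean prediction entropy for a linear extractor + classifier."""
    p = softmax(X @ W_f @ W_c)
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()

def tta_step(W_f, W_c, X, lr=0.05, eps=1e-4):
    """One test-time adaptation step on an unlabeled batch X,
    updating the feature extractor and classifier together rather
    than adapting only one of them."""
    def grad(W, loss_fn):
        # Central-difference numerical gradient (illustration only).
        g = np.zeros_like(W)
        for idx in np.ndindex(W.shape):
            Wp = W.copy(); Wp[idx] += eps
            Wm = W.copy(); Wm[idx] -= eps
            g[idx] = (loss_fn(Wp) - loss_fn(Wm)) / (2 * eps)
        return g
    g_f = grad(W_f, lambda W: entropy_loss(W, W_c, X))
    g_c = grad(W_c, lambda W: entropy_loss(W_f, W, X))
    # Synchronous update of both modules keeps them matched.
    return W_f - lr * g_f, W_c - lr * g_c
```

A single `tta_step` on an unlabeled test batch lowers the prediction entropy of both modules jointly, which is the mismatch-avoidance idea the summary describes.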
Current test-time adaptation methods, including those designed for continual adaptation, eventually collapse and perform worse than a non-adapting, pretrained model when evaluated on long-term, continuously changing corruptions.
This work proposes Adaptive Distribution Masked Autoencoders (ADMA), a continual self-supervised learning method that enhances target-domain knowledge extraction while mitigating the accumulation of distribution shift.
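The core masked-autoencoder objective underlying such methods can be sketched as follows. This is only the generic objective, not ADMA's adaptive distribution masking: the tiny one-layer encoder/decoder (`W_enc`, `W_dec`), the uniform random mask, and the masking ratio are all illustrative assumptions.

```python
import numpy as np

def masked_reconstruction_loss(x, W_enc, W_dec, mask_ratio=0.5, rng=None):
    """Mask a random subset of input entries, encode the visible part
    (masked slots zeroed), decode, and score MSE only on the masked
    positions -- the self-supervised signal a masked autoencoder uses
    to extract target-domain structure from unlabeled test data."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = rng.random(x.shape) < mask_ratio   # True = hidden from encoder
    visible = np.where(mask, 0.0, x)
    recon = np.tanh(visible @ W_enc) @ W_dec  # toy encoder/decoder pair
    return ((recon - x) ** 2)[mask].mean()
```

Because the loss is computed on unlabeled target samples, minimizing it at test time adapts the model to the current domain without ground-truth labels.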