Core Concepts
A distillation-free approach to efficient continual panoptic segmentation that combines visual prompt tuning with a logit manipulation strategy.
Abstract
The paper presents a novel method, ECLIPSE, for efficient continual learning in panoptic segmentation. The key highlights are:
ECLIPSE leverages Visual Prompt Tuning (VPT) to address the challenges of continual panoptic segmentation. It freezes the base model parameters and fine-tunes only a small set of prompt embeddings, effectively mitigating catastrophic forgetting and enhancing plasticity.
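A minimal PyTorch sketch of this idea follows. It assumes a Mask2Former-style model whose forward pass accepts a set of query embeddings; the `frozen_model` interface, `base_queries`, and the initialization scale are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class PromptTunedSegmenter(nn.Module):
    """Sketch of prompt-based continual tuning: the pretrained model is frozen
    and only a small set of new prompt embeddings is trained per step."""

    def __init__(self, frozen_model: nn.Module, num_new_prompts: int, embed_dim: int):
        super().__init__()
        self.frozen_model = frozen_model
        for p in self.frozen_model.parameters():
            p.requires_grad = False  # base knowledge stays intact, mitigating forgetting

        # The only trainable parameters for the new continual-learning step.
        self.new_prompts = nn.Parameter(torch.randn(num_new_prompts, embed_dim) * 0.02)

    def forward(self, images: torch.Tensor, base_queries: torch.Tensor) -> torch.Tensor:
        # Concatenate the frozen base queries with the newly learned prompts and
        # let the (frozen) decoder attend over all of them; the two-argument
        # forward signature is an assumption for illustration only.
        queries = torch.cat([base_queries, self.new_prompts], dim=0)
        return self.frozen_model(images, queries)
```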
To tackle the inherent issues of error propagation and semantic drift in continual panoptic segmentation, the authors propose a simple yet effective logit manipulation strategy. This allows the model to leverage the inter-class knowledge of all learned classes to meaningfully update the no-object logit.
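The sketch below conveys the general idea under stated assumptions: the exact aggregation ECLIPSE applies is not reproduced here, so the max over learned-class logits and the `alpha` weight are placeholders for whatever inter-class aggregation the paper actually uses.

```python
import torch


def manipulate_no_object_logit(class_logits: torch.Tensor,
                               no_obj_logit: torch.Tensor,
                               alpha: float = 1.0) -> torch.Tensor:
    """Update the no-object logit using evidence from all learned classes.

    class_logits: (num_queries, num_learned_classes) logits for every class seen so far
    no_obj_logit: (num_queries, 1) logit for the special "no object" class
    """
    # Strong evidence for any learned class should lower the no-object score;
    # the max over classes is used here as a simple stand-in aggregation.
    class_evidence = class_logits.max(dim=-1, keepdim=True).values
    updated_no_obj = no_obj_logit - alpha * class_evidence
    # Return the full logit vector with the manipulated no-object entry appended.
    return torch.cat([class_logits, updated_no_obj], dim=-1)
```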
Comprehensive experiments on the ADE20K dataset demonstrate that ECLIPSE achieves a new state of the art in continual panoptic segmentation while requiring only 1.3% of the trainable parameters used by previous distillation-based methods.
ECLIPSE also shows superior performance in continual semantic segmentation, outperforming previous methods that rely on distillation strategies or saliency maps.
The authors analyze the impact of the individual components of ECLIPSE, including the number of prompts, the prompt tuning strategy, and the effect of logit manipulation. They also explore whether a stronger frozen backbone, such as Swin-L, can further improve performance.
Stats
Beyond the performance metrics on the ADE20K dataset for continual panoptic and semantic segmentation, and the 1.3% trainable-parameter figure noted above, the paper's main text does not provide additional standalone numerical statistics.
Quotes
The paper does not contain any striking quotes that directly support its key arguments.