Core Concepts
Efficiently forgetting specific knowledge while maintaining model performance is crucial for privacy and bias reduction.
Abstract
The paper introduces the concept of continual forgetting in pre-trained vision models to address privacy and bias concerns: unwanted knowledge must be deleted efficiently while the impact on the remaining knowledge is minimized. The proposed Group Sparse LoRA (GS-LoRA) method fine-tunes the FFN layers with LoRA modules, trained independently for each forgetting task. A group-sparse regularization automatically selects specific LoRA groups and zeroes out the rest, making the forgetting effective, parameter-efficient, and data-efficient. Extensive experiments on face recognition, object detection, and image classification show that GS-LoRA forgets specific classes with minimal impact on the other classes.
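To make the selection mechanism concrete, here is a minimal pure-Python sketch of the group-sparsity idea (illustrative names and values, not the authors' implementation): each LoRA adapter attached to an FFN layer forms one parameter group, and the regularizer sums the L2 norms of the groups, which drives whole groups to exactly zero so that only a few LoRA modules are actually modified.

```python
import math

def group_sparse_penalty(lora_groups):
    """Group-lasso regularizer: sum of L2 norms, one term per LoRA group.

    lora_groups: list of flat parameter lists, one per LoRA module
    (e.g. the concatenated A and B matrices of one FFN adapter).
    Because the penalty is a norm (not a squared norm) per group, it
    pushes entire groups to zero, effectively selecting which layers
    receive a forgetting update.
    """
    return sum(math.sqrt(sum(p * p for p in group)) for group in lora_groups)

# Two hypothetical LoRA groups: one active, one already zeroed out.
active = [0.3, -0.4]      # L2 norm = 0.5
pruned = [0.0, 0.0, 0.0]  # L2 norm = 0.0, contributes nothing
print(group_sparse_penalty([active, pruned]))  # 0.5
```

In training, this penalty would be added to the forgetting loss with a weight hyperparameter; groups whose norm collapses to zero leave their FFN layer untouched.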
Stats
Code will be released at https://github.com/bjzhb666/GS-LoRA.
Face recognition accuracy: 73.78% before forgetting.
Object detection AP: 44.3 before forgetting.