Combining model pruning with Low-Rank Adaptation (LoRA) offers an effective, efficient approach to machine unlearning, outperforming existing methods in balancing privacy, performance, and computational cost.
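To make the combination concrete, here is a minimal PyTorch sketch of the prune-then-adapt idea: magnitude-prune the backbone, freeze it, and train only low-rank adapters on the retained data. The `LoRALinear` wrapper, the 30% pruning ratio, and the top-level layer replacement are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank residual update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pruned backbone
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_a.T) @ self.lora_b.T

def prune_then_adapt(model: nn.Module, amount: float = 0.3) -> nn.Module:
    # Step 1: magnitude-prune every linear layer to disrupt memorized weights.
    for module in model.modules():
        if isinstance(module, nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")   # bake the mask into the weights
    # Step 2: wrap top-level linear layers (for brevity) with LoRA adapters;
    # only the adapters are trained afterwards, on the retained data.
    for name, child in list(model.named_children()):
        if isinstance(child, nn.Linear):
            setattr(model, name, LoRALinear(child))
    return model
```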
Machine unlearning is crucial for protecting user privacy and enhancing the security of machine learning models in the age of GDPR and growing privacy concerns.
This paper introduces a novel machine unlearning method for pre-trained models that leverages residual feature alignment using LoRA to efficiently and effectively remove the influence of specific data subsets while preserving performance on retained data.
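One plausible reading of that objective is sketched below: the LoRA branch's residual contribution is driven toward zero on retained data (keeping features aligned with the pre-trained model) while forget-set predictions are pushed toward uniform. The loss names, the uniform-distribution target, and the weighting are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def residual_alignment_loss(residual_retain: torch.Tensor,
                            logits_forget: torch.Tensor,
                            lambda_forget: float = 1.0) -> torch.Tensor:
    """residual_retain: the LoRA branch's additive contribution to the
    features of retained-data inputs; logits_forget: model outputs on the
    forget set."""
    # Retained data: keep the low-rank residual near zero so the adapted
    # features stay aligned with the pre-trained ones.
    retain_term = residual_retain.pow(2).mean()
    # Forget data: push predictions toward the uniform distribution.
    num_classes = logits_forget.size(-1)
    uniform = torch.full_like(logits_forget, 1.0 / num_classes)
    forget_term = F.kl_div(F.log_softmax(logits_forget, dim=-1),
                           uniform, reduction="batchmean")
    return retain_term + lambda_forget * forget_term
```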
This paper proposes a novel game-theoretic machine unlearning algorithm that balances effective data removal from trained models against the privacy leakage risks the unlearning process itself can introduce.
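One way to realize such a game is the alternating optimization sketched below, where the privacy player is modeled as a membership-inference attacker; the attacker architecture, loss terms, and loop structure are illustrative assumptions rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def game_step(model, attacker, opt_model, opt_attacker,
              retain_batch, forget_batch, lam: float = 0.5):
    x_r, y_r = retain_batch
    x_f, _ = forget_batch

    # Attacker move: learn to distinguish forget-set outputs (label 1)
    # from retain-set outputs (label 0).
    with torch.no_grad():
        p_r = model(x_r).softmax(dim=-1)
        p_f = model(x_f).softmax(dim=-1)
    attack_logits = attacker(torch.cat([p_r, p_f]))
    attack_labels = torch.cat([torch.zeros(len(p_r), dtype=torch.long),
                               torch.ones(len(p_f), dtype=torch.long)])
    attack_loss = F.cross_entropy(attack_logits, attack_labels)
    opt_attacker.zero_grad(); attack_loss.backward(); opt_attacker.step()

    # Unlearner move: keep utility on retained data while making forget-set
    # outputs indistinguishable from retained ones to the attacker.
    utility_loss = F.cross_entropy(model(x_r), y_r)
    fooled = attacker(model(x_f).softmax(dim=-1))
    privacy_loss = F.cross_entropy(
        fooled, torch.zeros(len(x_f), dtype=torch.long))
    total = utility_loss + lam * privacy_loss
    opt_model.zero_grad(); total.backward(); opt_model.step()
    return utility_loss.item(), privacy_loss.item()
```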
Pseudo-Probability Unlearning (PPU) is a novel method for efficient, privacy-preserving machine unlearning: it replaces the output probabilities of the data to be forgotten with pseudo-probabilities, optimizes those pseudo-probabilities, and then updates the model weights accordingly.
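A minimal sketch of that pipeline follows, assuming the pseudo-probabilities are formed by removing the forgotten class's probability mass and renormalizing; this construction and the KL fine-tuning objective are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def make_pseudo_probs(logits: torch.Tensor,
                      true_labels: torch.Tensor) -> torch.Tensor:
    """Remove the forgotten class's probability mass and renormalize, so the
    target distribution looks as if the model never learned that label."""
    probs = logits.softmax(dim=-1)
    probs = probs.scatter(1, true_labels.unsqueeze(1), 0.0)
    return probs / probs.sum(dim=-1, keepdim=True)

def ppu_update(model, optimizer, x_forget, y_forget):
    # Build the pseudo-probability targets from the current model outputs.
    with torch.no_grad():
        targets = make_pseudo_probs(model(x_forget), y_forget)
    # Update the weights so the model reproduces the pseudo-probabilities.
    loss = F.kl_div(F.log_softmax(model(x_forget), dim=-1),
                    targets, reduction="batchmean")
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```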
The RESTOR framework evaluates the ability of machine unlearning algorithms to not only forget unwanted information but also to restore a language model's original knowledge state, a concept termed "restorative unlearning."
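In that spirit, a simple restoration-style score can be computed from accuracies measured before corruption, after corruption, and after unlearning; the normalization below is an illustrative assumption, not RESTOR's actual metric.

```python
def restoration_score(acc_clean: float, acc_corrupted: float,
                      acc_unlearned: float) -> float:
    """1.0 = unlearning fully restored the clean model's accuracy on the
    affected facts; 0.0 = no better than the corrupted model."""
    headroom = acc_clean - acc_corrupted
    if headroom <= 0:
        return 0.0
    return max(0.0, min(1.0, (acc_unlearned - acc_corrupted) / headroom))
```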
Unlearning data from trained machine learning models is a complex task, significantly influenced by the entanglement of data to be retained and forgotten, and the memorization level of the data to be forgotten. The Refined-Unlearning Meta-algorithm (RUM) leverages these insights to improve unlearning by strategically partitioning and processing data subsets with tailored algorithms.
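The meta-step might look like the sketch below: score each forget example's memorization, partition on a threshold, and route each shard to a different unlearning routine. The score function, the threshold, and the two routines are illustrative stand-ins, not RUM's specific choices.

```python
from typing import Callable, List, Sequence

def refined_unlearning(forget_set: Sequence,
                       mem_score: Callable[[object], float],
                       low_mem_unlearner: Callable[[List], None],
                       high_mem_unlearner: Callable[[List], None],
                       threshold: float = 0.5) -> None:
    # Partition the forget set: weakly vs. strongly memorized examples.
    low = [ex for ex in forget_set if mem_score(ex) < threshold]
    high = [ex for ex in forget_set if mem_score(ex) >= threshold]
    # Route each shard to the algorithm best suited to it, e.g. a cheap
    # fine-tune-on-retain pass for low-memorization data and a stronger
    # gradient-ascent-style routine for highly memorized data.
    low_mem_unlearner(low)
    high_mem_unlearner(high)
```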
This paper proposes a class-unlearning method for neural networks that preserves both user privacy and model utility: layer-wise relevance analysis identifies the neurons critical to the target class, and those neurons are then perturbed, all without access to the original training data.
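A minimal sketch of the perturbation step, assuming a precomputed per-layer relevance vector (for instance, gradient-times-activation as a simple stand-in for layer-wise relevance propagation); the top-k selection and the damping factor are assumptions for illustration.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def dampen_class_neurons(layer: nn.Linear, relevance: torch.Tensor,
                         top_k: int = 10, damping: float = 0.0) -> None:
    """Scale down the weights of the `top_k` neurons whose relevance to the
    unlearning class is highest; `relevance` has one entry per output unit."""
    idx = relevance.topk(top_k).indices      # most class-relevant units
    layer.weight[idx] *= damping             # perturb (damping=0 zeroes them)
    if layer.bias is not None:
        layer.bias[idx] *= damping
```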