This survey provides a comprehensive overview of machine unlearning, covering both traditional models and large language models (LLMs). It defines the unlearning process, classifies unlearning approaches, and establishes evaluation criteria.
The key highlights include:
Data-driven machine unlearning: This approach restructures the training dataset, through techniques such as data influence/poisoning, data partitioning, and data augmentation, to facilitate the unlearning of specific data.
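Data partitioning can be illustrated with a minimal SISA-style sketch (all names here are hypothetical, and the per-shard "model" is a stand-in): the training set is split into disjoint shards, one sub-model is trained per shard, and unlearning a point only requires retraining the single shard that contained it.

```python
def train(shard):
    """Stand-in 'model': remembers the majority label of its shard."""
    labels = [y for _, y in shard]
    return round(sum(labels) / len(labels))

def fit_shards(data, num_shards=4):
    """Split data into disjoint shards (round-robin) and train one model per shard."""
    shards = [[] for _ in range(num_shards)]
    for i, point in enumerate(data):
        shards[i % num_shards].append(point)
    return shards, [train(s) for s in shards]

def unlearn(shards, models, point):
    """Remove `point` and retrain ONLY the shard that contained it --
    the retraining cost is limited to one shard instead of the full dataset."""
    for i, shard in enumerate(shards):
        if point in shard:
            shard.remove(point)
            models[i] = train(shard)
            break
    return shards, models

def predict(models):
    """Aggregate the sub-models by majority vote."""
    votes = sum(models)
    return 1 if votes * 2 >= len(models) else 0
```

The design point is the trade-off the survey alludes to: sharding bounds the cost of exact unlearning, at the price of each sub-model seeing less data.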
Model-based machine unlearning: This category focuses on directly adjusting the model parameters or architecture to counteract the effects of the data to be forgotten, including methods like model shifting, pruning, and replacement.
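Model shifting can be sketched, under simplifying assumptions, as directly nudging trained parameters against the gradient contribution of the point to be forgotten. The example below (a tiny hand-rolled logistic regression; the function names and the gradient-ascent shift are illustrative, not the survey's specific algorithm) shows the idea:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grad(w, x, y):
    """Gradient of the logistic loss for one example."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    return [(p - y) * xi for xi in x]

def sgd_train(data, dim, lr=0.5, epochs=50):
    """Ordinary SGD training on the full dataset."""
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in data:
            g = grad(w, x, y)
            w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

def unlearn_by_shift(w, forget_point, lr=0.5, steps=50):
    """Shift parameters via gradient ASCENT on the forget point's loss,
    approximately cancelling that point's contribution to training."""
    x, y = forget_point
    for _ in range(steps):
        g = grad(w, x, y)
        w = [wi + lr * gi for wi, gi in zip(w, g)]
    return w
```

After the shift, the model's confidence on the forgotten example drops, without retraining on the remaining data; more principled variants in the literature use influence functions or Hessian information to compute the shift in one step.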
LLM unlearning: The survey examines two main categories of LLM unlearning: parameter-tuning and parameter-agnostic. Parameter-tuning methods optimize model parameters to selectively modify the LLM's behavior, while parameter-agnostic techniques, such as in-context unlearning (ICuL), treat the LLM as a black box and use in-context learning to suppress the influence of the data to be forgotten.
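The black-box, in-context idea can be sketched as prompt construction alone: the forget point is placed in the context with a flipped label, followed by correctly labelled examples, so the model's in-context behavior moves away from the forgotten association. The template, label names, and sentiment task below are illustrative assumptions, not the exact ICuL recipe:

```python
def flip(label, labels=("negative", "positive")):
    """Return the opposite label (binary task assumed for illustration)."""
    return labels[1 - labels.index(label)]

def build_icul_prompt(forget_example, context_examples, query_text):
    """Assemble a prompt: forget point with FLIPPED label first,
    then correctly labelled context examples, then the query."""
    lines = []
    text, label = forget_example
    lines.append(f"Review: {text}\nSentiment: {flip(label)}")  # flipped label
    for text, label in context_examples:                        # correct labels
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query_text}\nSentiment:")           # query to complete
    return "\n\n".join(lines)
```

No parameters are touched; the "unlearning" lives entirely in the prompt, which is what makes the approach applicable to API-only models.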
Evaluation criteria: The survey discusses various empirical and theoretical evaluation metrics, including time-based, accuracy-based, similarity-based, and attack-based measures, to assess the effectiveness and efficiency of unlearning techniques.
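As one concrete instance of an attack-based measure, a simple loss-threshold membership inference test can be sketched as follows (a minimal illustration, not the survey's specific protocol): if the unlearned model's losses on forgotten points are indistinguishable from its losses on held-out points, the attacker's accuracy stays near chance (0.5), suggesting the data was effectively forgotten.

```python
def mia_accuracy(forget_losses, holdout_losses, threshold):
    """Loss-threshold membership inference: the attacker guesses
    'member' whenever the loss is below the threshold."""
    correct = sum(l < threshold for l in forget_losses)      # true members
    correct += sum(l >= threshold for l in holdout_losses)   # true non-members
    return correct / (len(forget_losses) + len(holdout_losses))
```

An accuracy near 1.0 means the forgotten points are still distinguishable (unlearning failed); an accuracy near 0.5 is the desired outcome.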
Challenges and future directions: The survey highlights the need for more efficient and accurate unlearning algorithms, the development of standardized evaluation metrics, and the consideration of legal and ethical implications in the context of machine unlearning.
Overall, this survey provides a comprehensive and up-to-date understanding of the field of machine unlearning, serving as a valuable resource for researchers, practitioners, and policymakers working at the intersection of machine learning, privacy, and security.
Source: arxiv.org