Basic Concepts
This paper examines the complexities of federated unlearning, highlighting the need for tailored mechanisms owing to the distinctive characteristics of distributed learning. It aims to offer insights and recommendations for future studies on federated unlearning.
Summary
The paper explores the challenges and opportunities in federated unlearning, focusing on the need for specialized mechanisms arising from the differences between centralized and distributed learning. It categorizes existing techniques, compares the assumptions made in the literature, and discusses implications for future research.
Federated learning (FL) facilitates collaborative model training while respecting privacy regulations such as the GDPR. However, emerging privacy requirements may oblige model owners to remove some learned data from their models. Many techniques developed for unlearning in centralized settings are not directly applicable to FL because of fundamental differences between the two settings.
A recent line of work focuses on developing unlearning mechanisms tailored to FL. The paper aims to identify research trends and challenges in federated unlearning by categorizing papers published since 2020.
The study compares existing federated unlearning methods with respect to influence removal and performance recovery, as well as their threat models, assumptions, implications, limitations, and evaluation metrics.
Insights from this analysis aim to guide future research on federated unlearning.
Statistics
Federated learning (FL), introduced in 2017, facilitates collaborative learning among mutually non-trusting parties without the need for explicit data sharing.
Machine Unlearning (MU) enables the removal of the influence of specific samples or features from trained models upon request.
FL training is interactive: clients train models locally, and a server iteratively aggregates these local models into a global model.
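To make the aggregation step concrete, here is a minimal sketch of weighted model averaging in the style of FedAvg, assuming model parameters are represented as plain NumPy arrays; the function name fedavg_aggregate and the toy setup are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client model weights (FedAvg-style sketch).

    client_weights: list of per-client parameter lists, each holding that
                    client's model parameters layer by layer as np.ndarray.
    client_sizes:   number of local training samples per client, used as
                    aggregation weights.
    """
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    # For each layer, sum the clients' parameters scaled by their data share.
    return [
        sum(c * w[layer] for c, w in zip(coeffs, client_weights))
        for layer in range(len(client_weights[0]))
    ]

# Toy round: three clients, each with one weight matrix and one bias vector.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
global_model = fedavg_aggregate(clients, client_sizes=[100, 50, 150])
print([p.shape for p in global_model])  # [(4, 2), (2,)]
```

This iterative aggregation is one reason centralized unlearning techniques transfer poorly: a target client's contribution is mixed into every round of the global model rather than sitting in one place.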
Non-IID data distribution across clients adds complexity to federated unlearning.
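As an illustration of what non-IID client data can look like, a common way to simulate label skew in FL experiments is to partition each class across clients with Dirichlet-distributed proportions; the helper dirichlet_partition below is a hypothetical name, and the approach is a widely used experimental convention rather than something the paper prescribes.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Label-skewed non-IID split: for each class, divide its samples
    across clients with proportions drawn from Dirichlet(alpha).
    Smaller alpha -> more skewed (more non-IID) client datasets."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in zip(client_indices, np.split(idx, cut_points)):
            client.extend(part.tolist())
    return client_indices

labels = np.repeat(np.arange(10), 100)  # toy dataset: 10 classes, 100 each
parts = dirichlet_partition(labels, num_clients=5, alpha=0.3)
print([len(p) for p in parts])
```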
Exact unlearning guarantees that the distribution of unlearned models is indistinguishable from the distribution of models retrained from scratch without the forgotten data.
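A common formalization of this guarantee, with notation chosen here for illustration (the exact definition varies across papers): for a learning algorithm A, an unlearning mechanism U, a dataset D, and a forget set D_f,

```latex
% Exact unlearning: the unlearned model must follow the same distribution
% as a model retrained from scratch on the retained data D \setminus D_f.
\forall S \subseteq \mathcal{H}:\quad
\Pr\big[\, U(A(D),\, D,\, D_f) \in S \,\big]
  \;=\; \Pr\big[\, A(D \setminus D_f) \in S \,\big]
```

where S ranges over (measurable) sets of models in the hypothesis space \mathcal{H}.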
Quotes
"Unlearning using historical information could increase the correctness of the unlearned model." - Shao et al.
"Models are perturbed such that they fail to achieve the task for the target information." - FFMU