Core Concepts
A lightweight machine unlearning method is proposed to efficiently remove the influence of a subset of a client's training data from a federated learning model for human activity recognition, without compromising the model's performance on the remaining data.
Abstract
The content discusses the challenges of privacy protection in Human Activity Recognition (HAR) and how Federated Learning (FL) can help mitigate these issues. However, even with FL, security and privacy concerns persist, especially with the emergence of regulations like GDPR that grant users the right to be forgotten.
The key highlights are:
Existing methods for unlearning data in FL, such as retraining, are resource-intensive.
The authors propose a lightweight unlearning method that fine-tunes the model with a third-party dataset, aligning the model's predicted probability distribution on the data to be forgotten with its distribution on the third-party data (a minimal sketch follows the highlights below).
This approach aims to achieve unlearning while preserving the model's performance on the remaining client data.
The authors also introduce a membership inference evaluation method to assess whether the forgotten data can still be recognized as part of the training set after unlearning (a simple illustration is also sketched below).
Experiments on HAR and MNIST datasets show that the proposed method achieves unlearning accuracy comparable to retraining, with speedups of several hundred to several thousand times over full retraining.
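To make the distribution-alignment idea concrete, here is a minimal PyTorch sketch of one plausible implementation, not the authors' exact code: the trained model is fine-tuned so that its predictions on the forget set move toward a reference distribution computed on a third-party dataset, while an ordinary loss on the retained data preserves accuracy. The names `model`, `forget_loader`, `third_party_loader`, and `retain_loader` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def unlearn_by_alignment(model, forget_loader, third_party_loader, retain_loader,
                         epochs=5, lr=1e-4, align_weight=1.0, device="cpu"):
    """Fine-tune `model` so its outputs on the forget data match a reference
    distribution derived from third-party data (hedged sketch, not the paper's code)."""
    model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=lr)

    # Reference distribution: the model's average predicted probabilities on the
    # third-party data, treated as a fixed soft target.
    with torch.no_grad():
        probs = [F.softmax(model(x.to(device)), dim=1) for x, _ in third_party_loader]
        ref_dist = torch.cat(probs).mean(dim=0)  # shape: [num_classes]

    for _ in range(epochs):
        for (fx, _), (rx, ry) in zip(forget_loader, retain_loader):
            fx, rx, ry = fx.to(device), rx.to(device), ry.to(device)

            # Alignment term: KL(model's distribution on forget data || reference)
            log_p_forget = F.log_softmax(model(fx), dim=1)
            align_loss = F.kl_div(log_p_forget,
                                  ref_dist.expand_as(log_p_forget),
                                  reduction="batchmean")

            # Utility term: keep performance on the remaining client data
            retain_loss = F.cross_entropy(model(rx), ry)

            loss = align_weight * align_loss + retain_loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

The balance between forgetting and utility is controlled here by `align_weight`, an assumed hyperparameter standing in for whatever weighting the authors use.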
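The membership inference evaluation can likewise be illustrated with a generic loss-threshold attack (an assumption for illustration; the paper's exact protocol may differ). If unlearning succeeded, the forget samples' losses should look like those of held-out non-members, and the attack's accuracy should fall toward 50%.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def per_sample_losses(model, loader, device="cpu"):
    """Collect per-example cross-entropy losses for all samples in `loader`."""
    model.to(device).eval()
    losses = []
    for x, y in loader:
        logits = model(x.to(device))
        losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    return torch.cat(losses)

def membership_attack_accuracy(model, forget_loader, test_loader):
    """Simple threshold attack: low loss => predicted member. Near-chance accuracy
    on the forget set after unlearning suggests the data was effectively forgotten."""
    member_loss = per_sample_losses(model, forget_loader)    # candidate "members"
    nonmember_loss = per_sample_losses(model, test_loader)   # known non-members
    threshold = torch.cat([member_loss, nonmember_loss]).median()
    correct = ((member_loss < threshold).float().sum()
               + (nonmember_loss >= threshold).float().sum())
    return correct / (len(member_loss) + len(nonmember_loss))
```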
Stats
Beyond the reported speedup range, the content does not provide specific numerical data or metrics to support the key claims. It focuses on the conceptual framework and methodology of the proposed unlearning approach.
Quotes
The content does not contain any direct quotes that are particularly striking or support the key arguments.