The paper introduces a novel online machine unlearning method that can achieve low-complexity and near-instantaneous unlearning. The key contributions are:
Approximator based on affine stochastic recursion: The authors introduce an approximator that characterizes the trajectory discrepancy between the learned model and the model retrained without each forgotten sample. The approximator is computed through an affine stochastic recursion.
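To make the idea concrete, here is a minimal sketch of an affine stochastic recursion of the form delta_{t+1} = A_t delta_t + b_t, where delta_t stands for the parameter discrepancy between the learned and retrained trajectories. All specifics (the step size, the stand-in Hessian, and the forgotten sample's gradient) are illustrative assumptions, not the paper's exact quantities.

```python
import numpy as np

def affine_recursion_step(delta, A, b):
    """One affine recursion step: delta <- A @ delta + b."""
    return A @ delta + b

rng = np.random.default_rng(0)
d = 4                            # toy parameter dimension
delta = np.zeros(d)              # discrepancy starts at zero (same init)
for t in range(10):
    eta = 0.1                    # hypothetical step size
    H = 0.5 * np.eye(d)          # stand-in for a loss Hessian at step t
    g = rng.normal(size=d)       # stand-in for the forgotten sample's gradient
    A = np.eye(d) - eta * H      # affine map induced by one SGD step
    b = eta * g / d              # affine offset contributed by the sample
    delta = affine_recursion_step(delta, A, b)
```

Because each step is a matrix-vector product plus a vector add, tracking the discrepancy this way never requires retraining the model.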
Hessian-free algorithm: Building on the approximator, the authors propose a near-instantaneous, Hessian-free algorithm for online unlearning. It employs Hessian-vector products (HVPs) and gradient clipping to reduce computation time and to bound the approximation error, respectively.
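The two building blocks can be sketched generically: an HVP computed by finite differences of the gradient (one standard way to get H @ v in O(d) without forming the d x d Hessian), and standard norm clipping. This is a hedged illustration of the techniques named above, not the paper's implementation; the quadratic loss and all names are assumptions.

```python
import numpy as np

def hvp_finite_diff(grad_fn, w, v, eps=1e-5):
    """Approximate the Hessian-vector product H(w) @ v via central
    finite differences of the gradient, never forming H explicitly."""
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2 * eps)

def clip_gradient(g, max_norm):
    """Norm clipping: rescale g if its Euclidean norm exceeds max_norm."""
    norm = np.linalg.norm(g)
    return g if norm <= max_norm else g * (max_norm / norm)

# Example on a quadratic loss 0.5 * w^T A w, whose gradient is A @ w,
# so the true Hessian is A and H @ v is known in closed form.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
grad_fn = lambda w: A @ w
w = np.array([1.0, 1.0])
v = np.array([1.0, 0.0])
hv = hvp_finite_diff(grad_fn, w, v)   # close to A @ v = [2, 0]
```

For a quadratic loss the finite-difference HVP is exact up to floating-point error, which makes it a convenient sanity check before applying the same trick to a neural-network gradient.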
Theoretical analysis: The authors provide theoretical guarantees showing that their method reduces the pre-computation time, storage, and per-sample unlearning time to O(nd), O(nd), and O(d), respectively, where d is the dimensionality of the model parameters and n is the total size of the dataset.
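The complexity split can be illustrated with a toy sketch: if a d-dimensional correction vector is pre-computed for each of the n samples (O(nd) time and storage), then unlearning one sample reduces to a single vector addition, which is O(d). The correction vectors here are random placeholders, purely to show the shape of the computation.

```python
import numpy as np

d, n = 8, 100
rng = np.random.default_rng(1)
learned = rng.normal(size=d)            # parameters of the learned model
# Hypothetical pre-computed per-sample corrections: O(nd) storage.
corrections = rng.normal(size=(n, d))

def unlearn_one(learned_params, correction):
    """O(d) per-sample unlearning update: apply the pre-computed correction."""
    return learned_params + correction

approx_retrained = unlearn_one(learned, corrections[42])
```

The point of the sketch is the cost profile: the expensive O(nd) work happens once, offline, so each deletion request is served near-instantaneously.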
Experimental validation: The experiments demonstrate that the proposed method outperforms existing methods in terms of time and memory costs as well as accuracy, across both convex and non-convex settings.
Source: Xinbao Qiao et al., arxiv.org, 04-03-2024, https://arxiv.org/pdf/2404.01712.pdf