Efficient Online Unlearning via Hessian-Free Recollection of Individual Data Statistics
This paper proposes an efficient, privacy-preserving online unlearning approach that departs significantly from prior work. The key idea is to recollect the trajectory discrepancy between the learned model and the model that would be obtained by retraining after forgetting each sample, and to show that this discrepancy can be approximated and characterized by an affine stochastic recursion.
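To make the affine-recursion idea concrete, here is a minimal toy sketch (not the paper's actual algorithm) on a quadratic loss, where the discrepancy between the full-data and retrained trajectories obeys an exact affine recursion Delta_{t+1} = (1 - eta) Delta_t + eta * shift. All variable names (`C`, `gd`, `shift`, etc.) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eta, T = 10, 3, 0.1, 50
C = rng.normal(size=(n, d))          # per-sample targets c_i
# Per-sample loss: 0.5 * ||w - c_i||^2; full-batch GD on the mean loss.

def gd(targets):
    """Run T steps of gradient descent from w = 0 on the mean loss."""
    w = np.zeros(d)
    for _ in range(T):
        w = w - eta * (w - targets.mean(axis=0))
    return w

w_full = gd(C)                       # model learned on all samples
w_retain = gd(C[1:])                 # retrained without sample 0

# Affine recursion for the discrepancy Delta_t = w_retain_t - w_full_t:
#   Delta_{t+1} = (1 - eta) * Delta_t + eta * (mean_retain - mean_full)
# Here the per-step shift is constant because the loss is quadratic.
delta = np.zeros(d)
shift = eta * (C[1:].mean(axis=0) - C.mean(axis=0))
for _ in range(T):
    delta = (1 - eta) * delta + shift

# "Unlearn" sample 0 by applying the recollected discrepancy -- no retraining.
w_unlearned = w_full + delta
print(np.allclose(w_unlearned, w_retain))  # True on this quadratic toy
```

On general non-quadratic losses the recursion only approximates the retrained model, which is where the paper's analysis and Hessian-free recollection of per-sample statistics come in.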