Exploiting Counterfactual Explanations for Efficient Model Extraction Attacks on Machine Learning as a Service Platforms
Counterfactual explanations can be exploited to perform efficient model extraction attacks on machine learning as a service (MLaaS) platforms. Incorporating differential privacy into the counterfactual generation process can mitigate such attacks while preserving the quality of the explanations.
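The core intuition can be sketched with a toy example. A counterfactual explanation returns the nearest input that flips the model's prediction, which for a simple threshold classifier sits on the decision boundary itself, so each counterfactual query leaks the model parameter directly; adding Laplace noise to the returned counterfactual (a standard differential-privacy mechanism, used here as an illustrative defense) keeps the attacker's estimate imprecise. All names, the 1-D threshold model, and the specific noise calibration below are hypothetical assumptions for illustration, not the paper's actual attack or defense.

```python
import math
import random
import statistics

# Hypothetical MLaaS model: a 1-D threshold classifier f(x) = 1 iff x >= THETA.
THETA = 0.37  # secret model parameter the attacker wants to extract


def predict(x):
    """The black-box prediction API exposed by the (toy) MLaaS platform."""
    return int(x >= THETA)


def counterfactual(x):
    """Unprotected explanation API: the closest input with the opposite
    prediction. For a threshold model this is the boundary itself, so a
    single counterfactual query leaks THETA exactly."""
    return THETA


def dp_counterfactual(x, epsilon=1.0, sensitivity=1.0):
    """Illustrative defense: perturb the counterfactual with Laplace noise
    of scale sensitivity/epsilon (Laplace mechanism, sampled via inverse CDF)."""
    u = random.random() - 0.5
    scale = sensitivity / epsilon
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return THETA + noise


# Extraction attack: average the counterfactuals from a handful of queries.
random.seed(0)
plain_estimate = statistics.mean(counterfactual(random.random()) for _ in range(5))
noisy_estimate = statistics.mean(dp_counterfactual(random.random()) for _ in range(5))

print("error without defense:", abs(plain_estimate - THETA))
print("error with DP noise:  ", abs(noisy_estimate - THETA))
```

Under this toy setup the undefended API is extracted exactly, while the noisy API forces the attacker to trade query budget against the privacy parameter epsilon, which is the tension the thesis above describes.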