Interpretable and Human-Drawable Adversarial Attacks Provide Insights into Deep Neural Network Classifiers
Adversarial doodles, optimized sets of Bézier curves drawn onto an input image, can fool deep neural network classifiers even when a human replicates them by hand, and they provide describable insights into how the shape of a doodle drives the classifier's output.
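To make the idea concrete, below is a minimal PyTorch sketch of how one could optimize Bézier-curve strokes against a classifier. It is an illustration of the general technique, not the authors' implementation: it assumes a hand-rolled soft (differentiable) rasterizer in place of a full differentiable vector-graphics renderer, and the function names (`bezier_points`, `rasterize`), the three-stroke setup, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

def bezier_points(ctrl, n=64):
    """Sample n points along each cubic Bezier stroke.
    ctrl: (C, 4, 2) control points in [0, 1] image coordinates."""
    t = torch.linspace(0, 1, n, device=ctrl.device).view(1, n, 1)
    p0, p1, p2, p3 = (ctrl[:, i:i + 1] for i in range(4))
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)      # (C, n, 2)

def rasterize(ctrl, size=224, thickness=0.01, tau=0.002):
    """Soft rasterization: a pixel's stroke opacity decays smoothly with
    its distance to the nearest curve sample, so gradients flow from the
    classifier's loss back to the control points."""
    pts = bezier_points(ctrl).reshape(-1, 2)                # (C*n, 2)
    axis = torch.linspace(0, 1, size, device=ctrl.device)
    grid = torch.stack(torch.meshgrid(axis, axis, indexing="ij"), dim=-1)
    dmin = torch.cdist(grid.reshape(-1, 2), pts).min(dim=1).values
    return torch.sigmoid((thickness - dmin.view(size, size)) / tau)

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Placeholder input: a random "image" and an arbitrary true label.
image = torch.rand(3, 224, 224)
label = torch.tensor([207])
mean = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)    # ImageNet stats
std = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

ctrl = torch.rand(3, 4, 2, requires_grad=True)              # three strokes
opt = torch.optim.Adam([ctrl], lr=0.01)

for step in range(200):
    mask = rasterize(ctrl.clamp(0, 1))                      # (224, 224)
    doodled = image * (1 - mask)                            # black strokes
    logits = model(((doodled - mean) / std).unsqueeze(0))
    loss = -F.cross_entropy(logits, label)                  # untargeted attack
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the attack lives in a handful of curve parameters rather than per-pixel noise, the optimized strokes can be inspected, described in words, and redrawn by hand, which is what makes the resulting insights describable and the doodles human-drawable.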