Core Concepts
Interactive visualizations help users identify and select images where computer vision models struggle, leading to improved performance.
Abstract
The study examines how interactive visualizations can help users find samples on which computer vision models make mistakes. The authors present two interactive visualizations within Sprite, a system for creating CV classification and detection models. The visualizations are designed to help users identify and select images where a model is struggling, so those images can be used to improve it. In a usability study comparing a baseline condition with a visualization condition, participants using the visualizations found more images containing prediction errors and a wider variety of error patterns, for both classification and detection tasks, and rated the system with significantly higher usability scores.
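As an illustration of the selection step described above, here is a minimal, self-contained sketch of how a tool might surface images a classifier is least sure about, so a user can review and capture similar cases. This is not the Sprite implementation; the input format and function names are hypothetical, and entropy is just one possible uncertainty criterion.

```python
import math

def prediction_entropy(probs):
    """Shannon entropy of a class-probability vector; higher means more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def most_uncertain(predictions, k=5):
    """Rank images by prediction entropy and return the k the model is least sure about.

    `predictions` maps an image id to its softmax probabilities
    (a hypothetical input format, not Sprite's actual API).
    """
    ranked = sorted(predictions.items(),
                    key=lambda item: prediction_entropy(item[1]),
                    reverse=True)
    return [image_id for image_id, _ in ranked[:k]]

# Example: three images classified over {cat, dog, bird}.
preds = {
    "img_001": [0.98, 0.01, 0.01],   # confident prediction
    "img_002": [0.40, 0.35, 0.25],   # uncertain -> good candidate for review
    "img_003": [0.70, 0.20, 0.10],
}
print(most_uncertain(preds, k=2))    # ['img_002', 'img_003']
```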
Stats
For the classification task, participants captured more images per error pattern on average in the visualization condition (M = 20.27, SD = 16.14) than in the baseline condition (M = 6.36, SD = 7.59).
On average, the images participants captured during the classification task led to more error patterns in the visualization condition (M = 5.63, SD = 2.46) than in the baseline condition (M = 3.63, SD = 2.5).
For the detection task, participants captured more images per error pattern on average in the visualization condition (M = 32.45, SD = 32.81) than in the baseline condition (M = 14.45, SD = 15.37).
On average, the images participants captured during the detection task led to more error patterns in the visualization condition (M = 6.36, SD = 3.13) than in the baseline condition (M = 4.27, SD = 2.49).
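To put the images-per-error-pattern differences in perspective, a standardized effect size can be computed from the reported means and standard deviations. The sketch below assumes equal group sizes (not reported in this excerpt); these values are derived here and are not statistics reported by the authors.

```python
import math

def cohens_d(mean_a, sd_a, mean_b, sd_b):
    """Cohen's d from summary statistics, assuming equal group sizes."""
    pooled_sd = math.sqrt((sd_a ** 2 + sd_b ** 2) / 2)
    return (mean_a - mean_b) / pooled_sd

# Classification task, images captured per error pattern (visualization vs. baseline).
print(round(cohens_d(20.27, 16.14, 6.36, 7.59), 2))    # ~1.10, a large effect
# Detection task, images captured per error pattern.
print(round(cohens_d(32.45, 32.81, 14.45, 15.37), 2))  # ~0.70, a medium-to-large effect
```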
Quotes
"Our results showed that participants in visualization condition found more images that contained prediction errors and more variety of error patterns for both classification and detection tasks."
"Participants using interactive visualizations can better assess a model’s prediction patterns globally and locally."