Core Concepts
The authors propose a fast and simple explainable AI (XAI) method for point cloud data, emphasizing the importance of understanding network properties in safety-critical applications. By introducing Feature Based Interpretability (FBI), they achieve a speedup of at least three orders of magnitude over modern XAI methods while reaching state-of-the-art results in classification explainability.
Abstract
The paper introduces a novel explainability method for point cloud networks, highlighting the significance of interpreting network properties. The proposed FBI measure scores each point using its per-point features taken before the network's pooling bottleneck, so a single forward pass suffices, and it shows superior performance compared to traditional methods. The analysis covers rotation invariance, robustness to outliers, domain shift, and the effect of supervised versus self-supervised training on the resulting influence maps.
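To make the pre-bottleneck idea concrete, here is a minimal sketch in PyTorch. The toy encoder, the helper names (TinyPointEncoder, fbi_influence), and the choice of the L2 norm as the per-point score are illustrative assumptions, not the authors' exact formulation; the sketch only shows why a single forward pass is enough to obtain an influence map.

```python
import torch
import torch.nn as nn

class TinyPointEncoder(nn.Module):
    """Toy PointNet-style encoder: shared per-point MLP, then a max-pool bottleneck."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Shared MLP applied independently to every point (x, y, z).
        self.mlp = nn.Sequential(
            nn.Linear(3, 32), nn.ReLU(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )

    def forward(self, pts: torch.Tensor):
        per_point = self.mlp(pts)                  # (B, N, feat_dim): pre-bottleneck features
        global_feat = per_point.max(dim=1).values  # (B, feat_dim): pooled bottleneck
        return per_point, global_feat

def fbi_influence(per_point_feats: torch.Tensor) -> torch.Tensor:
    """Per-point influence as the L2 norm of pre-bottleneck features (illustrative choice)."""
    return per_point_feats.norm(dim=-1)            # (B, N)

# Usage: one forward pass yields the influence map; no gradients or perturbations needed.
pts = torch.randn(2, 1024, 3)                      # batch of 2 clouds, 1024 points each
encoder = TinyPointEncoder()
with torch.no_grad():
    feats, _ = encoder(pts)
    influence = fbi_influence(feats)               # higher score = more influential point
print(influence.shape)                             # torch.Size([2, 1024])
```

Because nothing beyond a single encoder pass is involved, the cost of the explanation is essentially the cost of the forward pass itself, which is consistent with the speedup figures reported under Stats below.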
Key points include:
Introduction of a fast and simple XAI method for point clouds.
Use of Feature Based Interpretability (FBI) to compute per-point influence maps.
Achieving state-of-the-art results in classification explainability.
Analysis of rotation invariance, outlier robustness, and domain shift effects.
Comparison between supervised and self-supervised learning approaches.
The study offers practical insight into how point cloud networks form their predictions and how training choices shape their influence maps.
Stats
Speedup: FBI achieves a speedup of at least three orders of magnitude over modern XAI methods.
Timing: FBI's runtime is approximately constant regardless of network architecture.
Perturbation test: FBI outperforms the other baselines on 3 of the 4 examined networks.
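For context, a perturbation test of this kind removes the points an explanation ranks as most influential and measures how quickly classification accuracy degrades; a faster drop indicates a more faithful influence map. Below is a minimal sketch of such a test; perturbation_curve, classify_fn, and the drop fractions are hypothetical names and settings, not the paper's exact protocol.

```python
import torch

def perturbation_curve(points, influence, classify_fn, labels, fractions=(0.1, 0.2, 0.5)):
    """Drop the top-k most influential points and measure the remaining accuracy.

    points:      (B, N, 3) point clouds
    influence:   (B, N) per-point scores (e.g., from FBI)
    classify_fn: any callable mapping (B, M, 3) point clouds to (B, num_classes) logits
    labels:      (B,) ground-truth class indices
    """
    B, N, _ = points.shape
    order = influence.argsort(dim=1, descending=True)  # most influential points first
    accs = []
    for frac in fractions:
        keep = order[:, int(frac * N):]                # indices of the surviving points
        idx = keep.unsqueeze(-1).expand(-1, -1, 3)
        pruned = torch.gather(points, 1, idx)          # (B, N - k, 3) pruned clouds
        preds = classify_fn(pruned).argmax(dim=-1)
        accs.append((preds == labels).float().mean().item())
    return accs  # a steeper accuracy drop means a more faithful influence map
```

A stronger explanation method identifies the points the classifier truly relies on, so pruning them degrades accuracy faster; comparing these curves across methods is what the 3-out-of-4 result above refers to.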