Core Concepts
Prob-PSENNs enhance model explainability and reliability by introducing probabilistic prototypes, which capture explanatory uncertainty and yield more robust explanations.
Abstract
The paper introduces Probabilistic Self-Explainable Neural Networks (Prob-PSENNs) to address the lack of transparency in Deep Neural Networks. It reviews Prototype-Based Self-Explainable Neural Networks (PSENNs) and their limitations, which motivate the probabilistic extension. Prob-PSENNs replace point estimates of the prototypes with probability distributions, enabling end-to-end learning and capturing explanatory uncertainty. The paper details the architecture and training procedure, and evaluates Prob-PSENNs on the MNIST, Fashion-MNIST, and K-MNIST datasets. The results show that Prob-PSENNs provide more meaningful explanations and enhanced reliability compared to their non-probabilistic counterparts.
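To make the core idea concrete, here is a minimal sketch of a probabilistic prototype layer. It assumes a diagonal-Gaussian parameterization per prototype and the reparameterization trick; the class and parameter names are hypothetical illustrations, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ProbabilisticPrototypeLayer(nn.Module):
    """Sketch: prototypes as distributions instead of fixed points.

    Each prototype is a diagonal Gaussian in latent space, sampled
    with the reparameterization trick so the model stays trainable
    end to end. (Hypothetical illustration; the paper's exact
    parameterization may differ.)
    """

    def __init__(self, num_prototypes: int, latent_dim: int):
        super().__init__()
        # Learnable mean and log-variance for each prototype distribution.
        self.mu = nn.Parameter(torch.randn(num_prototypes, latent_dim))
        self.log_var = nn.Parameter(torch.zeros(num_prototypes, latent_dim))

    def sample_prototypes(self) -> torch.Tensor:
        # Reparameterization: p = mu + sigma * eps, with eps ~ N(0, I).
        eps = torch.randn_like(self.mu)
        return self.mu + torch.exp(0.5 * self.log_var) * eps

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distances between latent codes z
        # (batch, latent_dim) and one sampled prototype set.
        prototypes = self.sample_prototypes()
        return torch.cdist(z, prototypes) ** 2
```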
Introduction
Lack of transparency in Deep Neural Networks.
Prototype-Based Self-Explainable Neural Networks (PSENNs).
Introduction of Probabilistic Self-Explainable Neural Networks (Prob-PSENNs).
Core Concepts
Limitations of PSENNs.
Introduction of Prob-PSENNs with probabilistic prototypes.
Training Procedure
Optimization using a generalized loss function.
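A hedged sketch of what such a generalized objective might look like, assuming the interpretability terms common to prototype-based networks (each prototype close to some training example, and each example close to some prototype) applied to sampled prototypes; the weights and exact terms are illustrative, not the paper's definitive formulation:

```python
import torch
import torch.nn.functional as F

def prob_psenn_loss(logits, targets, z, prototypes,
                    lambda_1=0.05, lambda_2=0.05):
    """Sketch of a generalized PSENN-style loss (illustrative weights).

    Combines the usual classification loss with two prototype
    regularizers. With probabilistic prototypes, the terms are
    evaluated on a sampled prototype set, which is one plausible
    way to realize a generalized objective.
    """
    # Standard classification term.
    ce = F.cross_entropy(logits, targets)

    # Pairwise squared distances: (batch, num_prototypes).
    d = torch.cdist(z, prototypes) ** 2

    r1 = d.min(dim=0).values.mean()  # each prototype near some example
    r2 = d.min(dim=1).values.mean()  # each example near some prototype

    return ce + lambda_1 * r1 + lambda_2 * r2
```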
Experiments
Evaluation on MNIST, Fashion-MNIST, and K-MNIST datasets.
Conclusion
Benefits of Prob-PSENNs for explainability and reliability.
Stats
Prob-PSENN replaces point estimates with probability distributions for prototypes.
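One way to see how distributions over prototypes translate into explanatory uncertainty is to sample several prototype sets and measure how much the resulting explanations vary. The sketch below assumes the hypothetical model interface (encoder plus prototype layer) from the earlier snippets:

```python
import torch

@torch.no_grad()
def explanatory_uncertainty(model, x, num_samples=20):
    """Sketch: Monte Carlo estimate of explanatory uncertainty.

    Assumes a hypothetical Prob-PSENN `model` exposing `encoder` and
    `prototype_layer` attributes. Drawing several prototype sets and
    inspecting how much the prototype distances (the explanation)
    vary gives one measure of how reliable the explanation is.
    """
    z = model.encoder(x)  # latent codes, shape (batch, latent_dim)
    samples = torch.stack([
        torch.cdist(z, model.prototype_layer.sample_prototypes()) ** 2
        for _ in range(num_samples)
    ])  # shape (num_samples, batch, num_prototypes)
    # High variance across samples signals an uncertain explanation.
    return samples.mean(dim=0), samples.var(dim=0)
```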
Quotes
"Promising approaches to overcome limitations are Prototype-Based Self-
Explainable Neural Networks."
"Prob-PSENN provides more meaningful explanations than non-probabilistic counterparts."
"Replacing point estimates with probability distributions captures explanatory uncertainty."