The content presents an interpretable nonlinear dimensionality reduction method called featMAP. The key highlights are:
FeatMAP aims to improve the interpretability of nonlinear dimensionality reduction by preserving both the manifold structure and the source features of the data.
It first approximates the manifold topological structure using a k-nearest neighbor (kNN) graph and computes the tangent space at each data point by local singular value decomposition (SVD).
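The tangent-space estimation step can be illustrated with a short sketch: build a kNN graph, then run an SVD on each centered neighborhood and keep the top singular vectors as a local tangent basis. This is a generic illustration of local-SVD tangent estimation, not featMAP's exact implementation; the function name and parameters are ours.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_tangent_spaces(X, n_neighbors=10, dim=2):
    """Estimate a tangent basis at each point from its kNN neighborhood.

    Returns an array of shape (n_points, n_features, dim) whose slices
    are orthonormal bases of the estimated tangent spaces.
    """
    nbrs = NearestNeighbors(n_neighbors=n_neighbors).fit(X)
    _, idx = nbrs.kneighbors(X)
    bases = np.empty((X.shape[0], X.shape[1], dim))
    for i, neigh in enumerate(idx):
        # Center the neighborhood and take the top right singular vectors.
        local = X[neigh] - X[neigh].mean(axis=0)
        _, _, Vt = np.linalg.svd(local, full_matrices=False)
        bases[i] = Vt[:dim].T
    return bases
```

Because the basis vectors live in the original feature space, each tangent direction can be read as a weighted combination of source features, which is what makes this step useful for interpretability.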
FeatMAP then embeds the tangent space by preserving the alignment between tangent spaces of nearby data points. This allows the embedding to retain the source feature information.
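One standard way to quantify how well two nearby tangent spaces align is via principal angles: the singular values of the product of their orthonormal bases are the cosines of those angles. The sketch below shows this generic measure under that assumption; it is not necessarily the alignment term featMAP optimizes.

```python
import numpy as np

def tangent_alignment(B_i, B_j):
    """Cosines of principal angles between two tangent subspaces.

    B_i, B_j: (n_features, dim) matrices with orthonormal columns.
    Values near 1 mean the subspaces are well aligned.
    """
    s = np.linalg.svd(B_i.T @ B_j, compute_uv=False)
    # Clip guards against tiny numerical overshoot beyond [0, 1].
    return np.clip(s, 0.0, 1.0)
```

Identical subspaces give all cosines equal to 1, while orthogonal directions contribute cosines of 0, so an embedding that preserves these values retains the relative orientation of neighboring tangent spaces.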
Within the embedded tangent space, featMAP applies an anisotropic projection to embed the data points, which maintains the local density and similarity structure.
The embedding by featMAP provides a frame that locally demonstrates the source features and their importance, enabling interpretable dimensionality reduction.
Experiments on MNIST digit classification, Fashion-MNIST classification, and COIL-20 object detection show that featMAP successfully uses the source features to explain the classification and detection results.
FeatMAP is also applied to interpret MNIST adversarial examples, where it uses feature importance to explicitly explain the misclassification caused by the adversarial attack.
Quantitative comparisons with state-of-the-art dimensionality reduction methods demonstrate that featMAP achieves comparable performance on both local and global structure preservation metrics.
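A common metric for local structure preservation in such comparisons is trustworthiness, which penalizes points that appear as neighbors in the embedding but not in the original space. A minimal example using scikit-learn's implementation (with PCA as a stand-in embedding, since featMAP itself is not assumed to be installed):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

# Score how well a 2-D PCA embedding of the digits data preserves
# local neighborhoods (1.0 means perfect preservation).
X, _ = load_digits(return_X_y=True)
emb = PCA(n_components=2).fit_transform(X)
score = trustworthiness(X, emb, n_neighbors=5)
print(f"trustworthiness: {score:.3f}")
```

Swapping in embeddings from different methods and comparing their scores is one way such quantitative evaluations of local structure are typically run.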