Masked Attention: A Novel Approach to Enhance Interpretability of Vision Transformers in Computational Pathology
Masking background patches in the attention mechanism of Vision Transformers enhances model interpretability without compromising performance in computational pathology tasks.
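The core idea can be sketched in a few lines: before the attention softmax, the logits for background patches are set to negative infinity so those patches receive zero attention weight. The sketch below is a minimal NumPy illustration under that assumption, not the paper's implementation; the function name and shapes are hypothetical.

```python
import numpy as np

def masked_attention(q, k, v, keep_mask):
    """Scaled dot-product attention that excludes background patches.

    q, k, v:   (n, d) arrays of per-patch queries, keys, and values.
    keep_mask: (n,) boolean array, True for tissue patches,
               False for background patches to be masked out.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)       # (n, n) attention logits
    scores[:, ~keep_mask] = -np.inf     # background keys get no attention
    # Numerically stable softmax over the key axis.
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

Because masked columns carry exactly zero weight, the resulting attention maps highlight only tissue regions, which is the interpretability benefit the summary refers to.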