The author introduces a novel self-attention mechanism within a prototype learning paradigm to enhance the explainability of transformer-based medical diagnosis.
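
To make the general idea concrete, below is a minimal, illustrative sketch of combining learnable prototypes with attention over a transformer encoder's output tokens, so that a prediction can be traced to the prototypes and input tokens that support it. This is not the author's actual architecture; the module names, dimensions, similarity measure, and prototype-to-class mapping are all assumptions for demonstration only.

```python
import torch
import torch.nn as nn


class PrototypeAttentionHead(nn.Module):
    """Hypothetical prototype head: prototypes attend over encoded tokens,
    and class logits come from prototype-evidence similarity."""

    def __init__(self, embed_dim: int, num_classes: int,
                 prototypes_per_class: int = 4, num_heads: int = 4):
        super().__init__()
        self.num_classes = num_classes
        num_protos = num_classes * prototypes_per_class
        # Learnable prototype vectors, grouped consecutively by class.
        self.prototypes = nn.Parameter(torch.randn(num_protos, embed_dim) * 0.02)
        # Prototypes act as queries over the encoder tokens; the attention
        # weights indicate which input regions support each prototype.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor):
        # tokens: (batch, seq_len, embed_dim) from a transformer encoder.
        batch = tokens.size(0)
        queries = self.prototypes.unsqueeze(0).expand(batch, -1, -1)
        attended, attn_weights = self.attn(queries, tokens, tokens)
        # Similarity between each prototype and the evidence it attended to.
        sims = torch.cosine_similarity(attended, queries, dim=-1)  # (batch, num_protos)
        # Class logit = best-matching prototype of that class.
        logits = sims.view(batch, self.num_classes, -1).max(dim=-1).values
        return logits, attn_weights  # attn_weights expose per-prototype evidence


# Example usage on dummy data (shapes only; no real model or dataset):
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)
head = PrototypeAttentionHead(embed_dim=64, num_classes=3)
x = torch.randn(8, 16, 64)            # e.g. 16 patch embeddings per image
logits, evidence = head(encoder(x))   # evidence: which tokens each prototype used
```

The design choice illustrated here is that explainability comes from two inspectable artifacts: the learned prototypes (case-like reference patterns per class) and the attention maps linking each prototype to concrete input tokens; how the paper realizes this coupling may differ.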