Core Concepts
This paper formalizes the goals and elements of interpretability in machine learning for medical imaging (MLMI) from an applied perspective, in order to guide method design and improve real-world usage.
Summary
The paper introduces a framework for interpretability in MLMI, starting from the goals of medical image analysis and motivated by real-world needs in this context, including trustworthiness, continual adaptation, and fairness.
The authors identify five core elements of interpretability in MLMI:
- Localizability: Where are the features, either spatially or temporally, that are driving the prediction?
- Visual Recognizability: What are the visual features that drive the prediction, described in a human-recognizable way?
- Physical Attribution: How are the image features connected to real-world physical quantities, measurements, or concepts?
- Model Transparency: How does the model produce its outputs, in a way that a human user can understand?
- Actionability: What information does the model provide that leads to a course of action?
The authors then connect these elements to existing interpretability methods in the literature, making the framework practically useful for clinicians and researchers. Finally, they discuss implications, limitations, and directions for future work, aiming to foster deeper understanding and evaluation of interpretability methods in MLMI.
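To make the link between elements and methods concrete, one common way to probe localizability is a gradient-based saliency map. The sketch below is an illustrative assumption, not the paper's own implementation: the model and image are placeholders, and vanilla input gradients stand in for the broader family of attribution methods in the literature.

```python
# Illustrative sketch: gradient-based saliency as one way to probe "localizability".
# The model, image, and method here are placeholder assumptions, not the paper's own.
import torch
import torch.nn as nn

# Placeholder CNN classifier standing in for any trained medical-imaging model.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),  # e.g., two diagnostic classes
)
model.eval()

# Dummy grayscale image standing in for a medical scan (batch x channel x H x W).
image = torch.rand(1, 1, 64, 64, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
logits = model(image)
predicted_class = logits.argmax(dim=1).item()
logits[0, predicted_class].backward()

# The absolute input gradient is a crude saliency map: large values mark pixels
# whose perturbation most changes the prediction, i.e., "where" the evidence lies.
saliency = image.grad.detach().abs().squeeze()
print(saliency.shape)  # torch.Size([64, 64])
```

In practice, a localization output like this would be paired with the other elements of the framework, for example by checking that the highlighted regions are visually recognizable and physically meaningful to a clinician.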
Key Statements
"Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research."
"There is a general sense of murkiness in what interpretability means."
"We identify a need to formalize the goals and elements of interpretability in MLMI."
"We arrive at a framework for interpretability in MLMI, which serves as a step-by-step guide to approaching interpretability in this context."
Quotes
"Why does the need for interpretability in MLMI arise?"
"What does one actually seek when interpretability is needed?"
"Informed by the above two points, how can we formalize them into a framework for interpretability in MLMI?"