The paper introduces ProtoAL, a novel method that integrates an interpretable deep neural network (DNN) model, specifically the ProtoPNet architecture, into a deep active learning (DAL) framework. This approach aims to address two key challenges in the adoption of AI-based computer-aided diagnosis (AI-CAD) solutions in the medical imaging field: the lack of interpretability in DNN predictions, and the high cost of labeling large medical datasets.
The ProtoAL method leverages the DAL framework to train the ProtoPNet model using carefully selected instances from a large unlabeled dataset, reducing the need for full dataset labeling. The ProtoPNet model provides inherent interpretability through the use of prototypes, which share similar features with the input image and can be visually explained to domain experts.
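The instance-selection step of a DAL loop like the one described above can be sketched as follows. This is a minimal illustration using least-confidence uncertainty sampling; the function names and the specific acquisition criterion are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of uncertainty-based instance selection in a deep
# active learning (DAL) loop. Assumes the model exposes per-instance
# class probabilities; `select_batch` is an illustrative name.

def least_confidence(probs):
    """Uncertainty score: 1 minus the highest class probability."""
    return 1.0 - max(probs)

def select_batch(unlabeled_probs, batch_size):
    """Return indices of the most uncertain unlabeled instances."""
    ranked = sorted(
        range(len(unlabeled_probs)),
        key=lambda i: least_confidence(unlabeled_probs[i]),
        reverse=True,
    )
    return ranked[:batch_size]

# Example: three unlabeled instances with predicted class probabilities.
probs = [
    [0.95, 0.05],  # confident prediction
    [0.55, 0.45],  # highly uncertain
    [0.70, 0.30],  # moderately confident
]
print(select_batch(probs, 2))  # → [1, 2]
```

The selected instances would then be sent to an annotator, added to the labeled pool, and the model retrained, repeating until a budget or performance target is reached.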
The authors evaluated ProtoAL on the Messidor dataset for diabetic retinopathy classification, achieving an area under the precision-recall curve (AUPRC) of 0.79 while utilizing only 76.54% of the available labeled data. This demonstrates the ability of ProtoAL to achieve comparable performance to models trained on the full dataset, while providing interpretability and reducing the data labeling burden.
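For reference, the AUPRC metric reported above can be computed as average precision, i.e., the mean of the precision values at the ranks of the positive instances. The sketch below is a pure-Python illustration with toy labels and scores, not data from the Messidor experiments.

```python
# Sketch of AUPRC as average precision: sort instances by decreasing
# score and average the precision at each rank where a positive occurs.

def average_precision(y_true, y_score):
    order = sorted(range(len(y_score)),
                   key=lambda i: y_score[i], reverse=True)
    total_pos = sum(y_true)
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            tp += 1
            ap += tp / rank  # precision at this recall point
    return ap / total_pos

# Toy example: two positives among four instances.
print(average_precision([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.8333…
```

This matches the step-wise (non-interpolated) average precision that libraries such as scikit-learn compute with `average_precision_score`.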
The paper also compares ProtoAL to baseline models, including a vanilla ResNet-18 and a standalone ProtoPNet, to assess the impact of the interpretability features and the DAL framework. The results show that ProtoAL can maintain a performance level similar to the ProtoPNet baseline while requiring fewer training instances, highlighting the benefits of the integrated approach.
The authors discuss the potential of ProtoAL to enhance the practical usability of AI-CAD solutions in the medical field, providing a means of trust calibration for domain experts and a suitable solution for learning in the data scarcity context often found in healthcare settings.