Core Concepts
EndoNet, a vision transformer-based deep learning model, shows promise in accurately classifying low- and high-grade endometrial cancer from histologic images.
Abstract
Authors: Manu Goyal, Laura J. Tafe, James X. Feng, Kristen E. Muller, Liesbeth Hondelink, Jessica L. Bentz, Saeed Hassanpour
Funding: Supported by the US National Library of Medicine and the US National Cancer Institute
Pages: 16
Figures: 3
Tables: 4
Corresponding Author: Manu Goyal
Abstract: Introduces EndoNet, which combines convolutional neural networks and a vision transformer for histologic classification of endometrial cancer.
Introduction: Discusses the importance of precise histologic evaluation and molecular classification in effective patient management.
Materials and Methods: Details datasets, data annotation, model development, and evaluation metrics.
Results: Compares the performance of a fully supervised CNN and EndoNet on the internal and external test sets.
Visualization: Shows attention maps of EndoNet in classifying low- and high-grade endometrial cancer.
Discussion and Future Directions: Acknowledges limitations, proposes future improvements, and outlines plans for clinical deployment.
Stats
The model achieved a weighted average F1-score of 0.91 and an AUC of 0.95 on the internal test set.
On the external test set, the model achieved an F1-score of 0.86 and an AUC of 0.86.
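A hedged sketch of how a weighted-average F1-score, like the 0.91 reported on EndoNet's internal test set, is computed: per-class F1 values are averaged with weights proportional to each class's support. The labels below are illustrative examples (0 = low-grade, 1 = high-grade), not data from the paper.

```python
from collections import Counter

def f1_per_class(y_true, y_pred, cls):
    # Standard F1 for one class: harmonic mean of precision and recall
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def weighted_f1(y_true, y_pred):
    # Each class's F1 is weighted by its support (count of true labels)
    support = Counter(y_true)
    n = len(y_true)
    return sum(f1_per_class(y_true, y_pred, c) * k / n for c, k in support.items())

# Illustrative labels: 0 = low-grade, 1 = high-grade (hypothetical, not from the paper)
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_pred = [0, 0, 1, 1, 1, 1, 0, 1]  # one low-grade case misclassified
print(f"weighted F1 = {weighted_f1(y_true, y_pred):.3f}")  # → weighted F1 = 0.873
```

Weighting by support makes the metric robust to class imbalance, which matters here because low- and high-grade cases are unlikely to occur in equal numbers.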
Quotes
"EndoNet has the potential to support pathologists without the need for manual annotations in classifying the grades of gynecologic pathology tumors."
"The model exhibited an increased focus or 'attention' towards regions that heavily overlap with endometrial cancer tissues independently segmented by expert pathologists."