Bibliographic Information: Häntze, H., Xu, L., Mertens, C.J. et al. MRSegmentator: Multi-Modality Segmentation of 40 Classes in MRI and CT. arXiv preprint arXiv:2405.06463v3 (2024).
Research Objective: This study aimed to develop and evaluate MRSegmentator, a deep learning model capable of automatically segmenting 40 anatomical structures in both MRI and CT images, addressing the limitations of existing organ-specific segmentation models.
Methodology: Researchers trained an nnU-Net-based model on a dataset of 1,200 manually annotated 3D axial MRI scans from the UK Biobank, 221 in-house MRI scans, and 1,228 CT scans from the TotalSegmentator dataset. A human-in-the-loop annotation workflow was employed, leveraging cross-modality transfer learning from an existing CT segmentation model. The model's performance was evaluated on three external datasets: the German National Cohort (NAKO) study (n=900), the AMOS22 dataset (n=60), and the TotalSegmentator-MRI test data (n=29). Segmentation quality was assessed using the Dice Similarity Coefficient (DSC), Hausdorff Distance (HD), and a novel vessel consistency (VC) metric.
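For readers unfamiliar with the primary metric, the Dice Similarity Coefficient measures voxel-wise overlap between a predicted and a reference mask, ranging from 0 (no overlap) to 1 (perfect agreement). A minimal sketch of how such a score is computed on binary masks (not the paper's evaluation code; the empty-mask convention here is an assumption):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks count as perfect agreement
    return 2.0 * intersection / denom

# Toy 2D example: 2 overlapping voxels, 3 foreground voxels in each mask
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_similarity(a, b))  # 2*2 / (3+3) ≈ 0.667
```

In multi-organ evaluation, this score is computed per anatomical class and then averaged, which is how the per-organ and per-sequence DSC values reported below are typically obtained.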
Key Findings: MRSegmentator demonstrated high accuracy in segmenting various anatomical structures, achieving average DSCs ranging from 0.85 ± 0.08 for T2-HASTE sequences to 0.91 ± 0.05 for T1-weighted Dixon in-phase sequences on the NAKO dataset. The model performed well on both well-defined organs (lungs: DSC 0.96, heart: DSC 0.94) and organs with anatomical variability (liver: DSC 0.96, kidneys: DSC 0.95). MRSegmentator also generalized well to CT images, achieving a mean DSC of 0.84 ± 0.11 on the AMOS CT data. Comparison with the TotalSegmentator-MRI model showed superior or comparable performance across different datasets and anatomical structures.
Main Conclusions: MRSegmentator accurately and robustly segments multiple anatomical structures in both MRI and CT images, outperforming or matching the performance of existing models. This open-source model provides a valuable tool for automated multi-organ segmentation in medical imaging research and can potentially streamline clinical workflows.
Significance: This research significantly contributes to the field of medical image analysis by presenting a robust and versatile deep learning model capable of multi-organ segmentation in both MRI and CT images. The model's ability to handle anatomical variations and generalize across different datasets and imaging protocols makes it a valuable tool for various clinical and research applications.
Limitations and Future Research: The study acknowledges potential annotation bias introduced by the human-in-the-loop approach and the limited anatomical variety in the UK Biobank training data. Future research could focus on expanding the range of supported anatomical structures and pathological conditions while maintaining the model's cross-modality capabilities. Further investigation into the observed gender-based performance differences is also warranted.