The study aims to analyze and compare existing deep learning solutions for lung segmentation on chest X-ray images, with the goal of determining the most accurate and robust method.
The researchers merged two existing datasets - Montgomery County X-ray and Shenzhen Hospital X-ray - to create a diverse test set comprising both normal and abnormal chest X-rays with various manifestations of tuberculosis. They evaluated three deep learning models - Lung VAE, TransResUNet, and CE-Net - on this test set, applying five different image augmentations (contrast, random rotation, bias field, horizontal flip, and discrete "ghost" artifacts) to assess each model's performance under diverse conditions.
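The augmentation pipeline described above can be illustrated with a minimal sketch. The function names and parameters below are hypothetical (the study does not publish its augmentation code), and the bias field is approximated here as a smooth multiplicative intensity gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

def horizontal_flip(img):
    # mirror the image left-to-right
    return img[:, ::-1]

def adjust_contrast(img, gamma):
    # simple gamma-based contrast change; img is assumed normalized to [0, 1]
    return np.clip(img, 0, 1) ** gamma

def random_bias_field(img, strength=0.3):
    # multiplicative low-frequency intensity gradient: a crude stand-in
    # for the "random bias field" augmentation mentioned in the study
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w] / max(h, w)
    coeffs = rng.uniform(-strength, strength, size=3)
    field = 1.0 + coeffs[0] * x + coeffs[1] * y + coeffs[2] * x * y
    return np.clip(img * field, 0, 1)

img = rng.random((256, 256))  # stand-in for a normalized chest X-ray
augmented = random_bias_field(adjust_contrast(horizontal_flip(img), 1.2))
print(augmented.shape)  # (256, 256)
```

Random rotation and ghosting artifacts would follow the same pattern: each transform maps an image to a perturbed image of the same shape, so transforms compose freely.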
The analysis revealed that CE-Net outperformed the other two models, achieving the highest Dice similarity coefficient and intersection over union (IoU) scores, particularly under the challenging "random bias field" augmentation. TransResUNet failed to accurately localize the lungs in certain instances, while Lung VAE performed slightly worse than CE-Net but still significantly better than TransResUNet.
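Both evaluation metrics referenced above have standard definitions over binary masks: Dice = 2|A ∩ B| / (|A| + |B|) and IoU = |A ∩ B| / |A ∪ B|. A minimal NumPy sketch (the small epsilon guards against empty masks):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|)
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    # IoU = |A ∩ B| / |A ∪ B|
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# toy example: two overlapping 4x4 masks
a = np.zeros((4, 4), dtype=np.uint8); a[:, :2] = 1   # left two columns
b = np.zeros((4, 4), dtype=np.uint8); b[:, 1:3] = 1  # middle two columns
print(round(dice_coefficient(a, b), 3))  # 0.5  (intersection 4, |A|+|B| = 16)
print(round(iou(a, b), 3))               # 0.333 (intersection 4, union 12)
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), so they rank models identically on any single image; reporting both mainly aids comparison across papers.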
The findings highlight the importance of methodological choices in model development and the need for robust and reliable deep learning solutions for medical image segmentation tasks.