Core Concepts
A novel unsupervised tumor-aware distillation teacher-student network (UTAD-Net) that accurately perceives and translates tumor areas, generating realistic multi-modal brain images without paired training data.
Abstract
The paper addresses the difficulty of obtaining fully paired multi-modal brain images in practice, where various factors leave some modalities missing. To address this, the authors propose UTAD-Net, an unsupervised tumor-aware distillation teacher-student network.
The key highlights are:
UTAD-Net consists of a teacher network and a student network. The teacher network learns an end-to-end mapping from source to target modality using unpaired images and corresponding tumor masks.
The translation knowledge is then distilled into the student network, enabling it to generate realistic tumor areas and whole images without requiring tumor masks at inference time.
Experiments show that UTAD-Net achieves competitive performance on both quantitative and qualitative evaluations compared to state-of-the-art methods.
The images generated by UTAD-Net are also shown to be effective for improving downstream brain tumor segmentation.
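The teacher-to-student transfer described above can be illustrated with a minimal sketch of a mask-weighted distillation loss: the student is trained to mimic the teacher's translated output, with tumor pixels weighted more heavily so tumor areas are reproduced faithfully. The function name `distillation_loss`, the L1 formulation, and the weighting factor `lam` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def distillation_loss(student_out, teacher_out, tumor_mask, lam=2.0):
    """Hypothetical mask-weighted distillation loss (illustrative only).

    The student output is pulled toward the teacher's translation via an
    L1 term; pixels inside the tumor mask (mask == 1) receive an extra
    weight of `lam`, so errors in tumor areas cost more. `lam` is an
    assumed hyperparameter, not from the paper.
    """
    l1 = np.abs(student_out - teacher_out)       # per-pixel L1 difference
    weights = 1.0 + lam * tumor_mask             # up-weight tumor region
    return float((weights * l1).mean())

# Toy example: 8x8 single-channel "images".
rng = np.random.default_rng(0)
teacher = rng.random((8, 8))                     # teacher's translated image
student = teacher + 0.1                          # student is uniformly off by 0.1
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0                             # small synthetic tumor region

loss = distillation_loss(student, teacher, mask)
```

With a uniform error of 0.1 and a 9-pixel tumor region in a 64-pixel image, the weighted mean works out to 0.1 * (1 + 2 * 9/64); only the relative weighting matters for training.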
Stats
Multi-modal brain images from MRI scans are widely used in clinical diagnosis to provide complementary information.
Obtaining fully paired multi-modal images is challenging due to various factors, resulting in modality-missing brain images.
Quotes
"Multi-modal brain images from MRI (Magnetic Resonance Imaging) scans are widely used in various clinical scenarios [1], [2]. These images are further divided into several modalities (sequences), such as T1-weighted (T1), T1-with-contrast-enhanced (T1ce), T2-weighted (T2), T2-fluid-attenuated inversion recovery (Flair), etc."
"Existing methods for multi-modal image translation have shown promising results in natural images. However, when applied to medical images, particularly brain tumor images, the results are often unsatisfactory [3]."