The content describes the development of a deep learning-based deformable image registration (DIR) method for aligning abdominal MRI and CT images. The proposed method assumes diffeomorphic deformations and leverages topology-preserving deformation features extracted from a probabilistic diffeomorphic registration model to accurately capture abdominal motion and estimate the deformation vector field (DVF). To enhance deformable feature extraction, the method integrates Swin transformers, which have shown excellent performance in motion tracking, into a convolutional neural network (CNN)-based model.
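The summary does not specify how the topology-preserving DVF is obtained, but probabilistic diffeomorphic registration models commonly integrate a stationary velocity field via scaling and squaring, which keeps the resulting deformation invertible. Below is a minimal 2-D NumPy sketch of that integration step under this assumption; the function names and grid sizes are illustrative, not from the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(field, disp):
    """Sample each channel of `field` at positions x + disp (linear interp)."""
    h, w = disp.shape[1:]
    grid = np.mgrid[0:h, 0:w].astype(float)   # identity coordinate grid
    coords = grid + disp                      # where each output voxel samples from
    return np.stack([
        map_coordinates(field[c], coords, order=1, mode='nearest')
        for c in range(field.shape[0])
    ])

def scaling_and_squaring(velocity, steps=7):
    """Integrate a stationary velocity field into a diffeomorphic displacement.

    Start from a small step v / 2**steps, then repeatedly compose the map
    with itself: u_new(x) = u(x) + u(x + u(x)).
    """
    disp = velocity / (2 ** steps)
    for _ in range(steps):
        disp = disp + warp(disp, disp)
    return disp

# Smooth synthetic velocity field on a 32x32 grid.
h = w = 32
yy, xx = np.mgrid[0:h, 0:w]
vel = np.stack([np.sin(xx / w * np.pi), np.zeros((h, w))]) * 2.0
dvf = scaling_and_squaring(vel)
print(dvf.shape)  # one displacement component per spatial axis
```

Because each composition step only doubles a very small deformation, the accumulated map stays smooth and invertible, which is what "topology-preserving" refers to here.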
The model is optimized with a combination of volume-based similarity for unsupervised training and surface matching for semi-supervised training. This dual objective encourages the generated DVF not only to align the volumes but also to match organ surfaces, with particular attention to organs-at-risk (OARs).
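The summary names the two loss families but not their exact forms. A common concrete choice is an intensity term (e.g., mean squared error or normalized cross-correlation) plus a Dice-style overlap term on OAR masks; the sketch below assumes MSE and soft Dice, with all names and the weighting hypothetical.

```python
import numpy as np

def mse_similarity(moving, fixed):
    """Volume-based similarity: mean squared intensity difference."""
    return float(np.mean((moving - fixed) ** 2))

def dice_surface_term(mask_a, mask_b, eps=1e-8):
    """Soft-Dice overlap on OAR masks, expressed as a loss (1 - Dice)."""
    inter = 2.0 * np.sum(mask_a * mask_b)
    denom = np.sum(mask_a) + np.sum(mask_b) + eps
    return float(1.0 - inter / denom)

def combined_loss(moving, fixed, mask_moving, mask_fixed, w_surface=0.5):
    """Dual objective: intensity alignment plus weighted surface/overlap term."""
    return (mse_similarity(moving, fixed)
            + w_surface * dice_surface_term(mask_moving, mask_fixed))

# A warped volume identical to the fixed one drives the loss toward zero.
vol = np.random.rand(8, 8, 8)
mask = (vol > 0.5).astype(float)
print(combined_loss(vol, vol, mask, mask))  # near zero for a perfect match
```

In practice the semi-supervised surface term only contributes on cases where OAR contours exist, while the intensity term applies to every training pair.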
The performance of the proposed method is evaluated on a dataset of 50 liver cancer patients who underwent radiotherapy. Compared to rigid registration and other state-of-the-art deep learning-based DIR methods, the proposed method demonstrates significant improvements in target registration error (TRE), Dice similarity coefficient for the liver and portal vein, and mean surface distance of the liver. Incorporating the Swin transformer is also shown to improve registration accuracy over the CNN-based model without the transformer.
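Two of the reported metrics, Dice similarity coefficient and mean surface distance, have standard definitions that can be computed directly from binary organ masks. A small NumPy/SciPy sketch (the helper names are mine, and real evaluations would scale distances by voxel spacing in mm):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface(mask):
    """Boundary voxels: the mask minus its binary erosion."""
    return mask & ~binary_erosion(mask)

def mean_surface_distance(a, b):
    """Symmetric mean distance between the two mask surfaces (in voxels)."""
    a, b = a.astype(bool), b.astype(bool)
    sa, sb = surface(a), surface(b)
    da = distance_transform_edt(~sb)[sa]  # from a's surface to b's surface
    db = distance_transform_edt(~sa)[sb]  # and the reverse direction
    return float((da.sum() + db.sum()) / (len(da) + len(db)))

# Two overlapping cubes as stand-ins for liver masks on CT and warped MRI.
a = np.zeros((16, 16, 16), bool); a[2:10, 2:10, 2:10] = True
b = np.zeros((16, 16, 16), bool); b[3:11, 3:11, 3:11] = True
print(dice(a, b))
print(mean_surface_distance(a, b))
```

Target registration error, by contrast, is computed from paired anatomical landmarks rather than masks, so it needs expert-annotated points and is not shown here.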