Basic Concepts
This paper introduces MVMS-RCN, a novel deep learning framework for sparse-view CT reconstruction that leverages a dual-domain unfolding approach with multi-view projection refinement and multi-scale geometric correction to achieve superior image quality compared to existing methods.
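As a rough illustration of the unfolding idea (a sketch, not the authors' exact architecture), one stage can alternate a projection-domain refinement with an image-domain correction. Here `fp` and `bp` are hypothetical stand-ins for the CT forward/back projection operators, and `refine_m` / `correct_d` are placeholder networks loosely corresponding to the paper's multi-view refinement and multi-scale correction modules:

```python
import torch
import torch.nn as nn

class UnfoldingStage(nn.Module):
    """One stage of a dual-domain unfolding network (illustrative sketch)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        def block() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, 1, 3, padding=1),
            )
        self.refine_m = block()   # projection-domain error refinement (cf. module M)
        self.correct_d = block()  # image-domain geometric correction (cf. module D)

    def forward(self, x, y_sparse, fp, bp):
        # Projection domain: re-project the current image estimate, compare
        # with the measured sparse-view sinogram, and refine the residual.
        residual = self.refine_m(fp(x) - y_sparse)
        # Image domain: back-project the refined error and correct the image.
        return x - self.correct_d(bp(residual))
```

Stacking several such stages and training them end to end is the standard deep-unfolding recipe; the paper's contribution lies in how the projection errors are formed across multiple views and corrected at multiple scales.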
Statistics
The proposed MVMS-RCN method achieves an average PSNR of 43.22 dB on fan-beam projection data and 43.78 dB on parallel-beam projection data.
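For reference, PSNR here is the standard peak signal-to-noise ratio in decibels. A minimal NumPy sketch of the metric (the paper's exact evaluation pipeline and intensity range are not specified here, so `data_range` is an assumption):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray,
         data_range: float = 1.0) -> float:
    """PSNR in dB between a reference image and a reconstruction,
    assuming intensities lie in [0, data_range]."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```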
The ablation study shows that the full-sparse-view projection error refinement strategy significantly improves reconstruction performance; the complete MVMS-RCN model achieves the highest average PSNR of 43.22 dB.
Sharing the network parameters of the multi-scale geometric correction module D across different stages leads to better performance (average PSNR of 43.22 dB) than using unshared parameters (average PSNR of 42.97 dB).
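In PyTorch terms, sharing D across stages simply means reusing one module instance in every stage rather than constructing a fresh copy per stage. The sketch below is a hypothetical illustration (a single convolution stands in for the actual multi-scale correction module), not the authors' code:

```python
import torch.nn as nn

def build_d_modules(num_stages: int, share: bool = True) -> nn.ModuleList:
    """Per-stage D modules. With share=True, every stage points to the
    same nn.Module instance, so a single parameter set is trained;
    with share=False, each stage trains its own copy."""
    if share:
        d = nn.Conv2d(1, 1, kernel_size=3, padding=1)
        return nn.ModuleList([d] * num_stages)  # same instance reused
    return nn.ModuleList(
        nn.Conv2d(1, 1, kernel_size=3, padding=1) for _ in range(num_stages)
    )
```

Besides the reported accuracy gain, sharing also shrinks the model, since the stage count no longer multiplies D's parameter count.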