Core Concept
A blind motion deblurring network (MCMS) based on multi-category information and a multi-scale stripe attention mechanism is proposed. It improves motion deblurring by fusing the edge information of the high-frequency component with the structural information of the low-frequency component.
Summary
The paper presents a three-stage encoder-decoder model for blind motion deblurring. The key highlights are:
- The first stage focuses on extracting features of the high-frequency component, which contains edge and texture information. The second stage concentrates on extracting features of the low-frequency component, which represents the structural and content information of the image.
- The third stage integrates the extracted low-frequency features, the extracted high-frequency features, and the original blurred image to recover the final clear image. This fusion of multi-category information effectively improves motion deblurring performance.
- A grouped feature fusion technique is developed to exploit the various types of features more richly and comprehensively at a deep level.
- A multi-scale stripe attention mechanism (MSSA) is designed, which effectively combines the anisotropy and multi-scale information of the image, significantly enhancing the capability of the deep model in feature representation.
- Extensive experiments on various datasets demonstrate that the proposed MCMS outperforms recently published state-of-the-art methods in both qualitative and quantitative evaluations.
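The summary does not say how the high- and low-frequency components are obtained. One common choice, assumed here purely for illustration (not necessarily the paper's method), is a Gaussian low-pass split: the smoothed image serves as the low-frequency (structure) component, and the residual serves as the high-frequency (edge/texture) component.

```python
import numpy as np

def decompose_frequencies(image, kernel_size=9, sigma=2.0):
    """Split an image into low- and high-frequency components via Gaussian
    low-pass filtering (an assumed stand-in for the paper's decomposition)."""
    # Build and normalise a 1-D Gaussian kernel.
    ax = np.arange(kernel_size) - kernel_size // 2
    kernel = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()

    # Separable convolution: filter rows, then columns (reflect padding).
    pad = kernel_size // 2
    padded = np.pad(image, pad, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    low = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

    high = image - low  # residual carries edges and texture
    return low, high

img = np.random.rand(32, 32)
low, high = decompose_frequencies(img)
# By construction, low + high reconstructs the input exactly.
```

Because the decomposition is additive, the third stage can in principle fuse both components with the blurred input without losing information.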
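The exact MSSA design is not reproduced in this summary. The toy sketch below illustrates only the underlying idea, anisotropic strip pooling along rows and columns at multiple strip widths combined into a sigmoid gate; the function name, scales, and gating scheme are all assumptions.

```python
import numpy as np

def stripe_attention(feat, scales=(1, 2, 4)):
    """Toy multi-scale stripe attention (a sketch, not the paper's exact MSSA):
    pool the feature map over horizontal and vertical strips at several strip
    widths, sum the pooled statistics into a gate, and reweight the input."""
    h, w = feat.shape
    gate = np.zeros_like(feat)
    for s in scales:
        assert h % s == 0 and w % s == 0, "toy version needs divisible sizes"
        # Horizontal strips of height s: one statistic per strip, spread back.
        h_pool = feat.reshape(h // s, s, w).mean(axis=(1, 2))  # shape (h//s,)
        gate += np.repeat(h_pool, s)[:, None]                  # broadcast over columns
        # Vertical strips of width s: one statistic per strip, spread back.
        v_pool = feat.reshape(h, w // s, s).mean(axis=(0, 2))  # shape (w//s,)
        gate += np.repeat(v_pool, s)[None, :]                  # broadcast over rows
    gate = 1.0 / (1.0 + np.exp(-gate / (2 * len(scales))))     # sigmoid gate in (0, 1)
    return feat * gate

feat = np.random.rand(8, 8)
out = stripe_attention(feat)  # same shape, reweighted by the stripe gate
```

Strip-shaped pooling windows are what make the mechanism anisotropic: horizontal and vertical motion blur leave different signatures along the two axes, which square windows would average away.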
Statistics
The paper reports the following key metrics:
- On the GoPro dataset, MCMS achieves a PSNR of 33.87 dB and an SSIM of 0.9671, the best among the compared methods.
- On the RealBlur dataset, MCMS achieves a PSNR of 29.13 dB and an SSIM of 0.8936, again the best among the compared methods.
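For reference, PSNR is a simple function of mean squared error between the restored and ground-truth images; a minimal implementation for images scaled to [0, 1]:

```python
import numpy as np

def psnr(clean, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images in the range [0, max_val]."""
    mse = np.mean((clean - restored) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

clean = np.ones((16, 16))
noisy = clean + 0.01  # constant error of 0.01 -> MSE = 1e-4
print(round(psnr(clean, noisy), 1))  # prints 40.0
```

SSIM additionally compares local luminance, contrast, and structure statistics, which is why the paper reports both metrics.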
Quotes
"A blind motion deblurring network (MCMS) based on multi-category information and multi-scale stripe attention mechanism is proposed."
"The first stage focuses on extracting the features of the high-frequency component, the second stage concentrates on extracting the features of the low-frequency component, and the third stage integrates the extracted low-frequency component features, the extracted high-frequency component features, and the original blurred image in order to recover the final clear image."
"A grouped feature fusion technique is developed so as to achieve richer, more three-dimensional and comprehensive utilization of various types of features at a deep level."
"A multi-scale stripe attention mechanism (MSSA) is designed, which effectively combines the anisotropy and multi-scale information of the image, a move that significantly enhances the capability of the deep model in feature representation."