The paper presents a three-stage encoder-decoder model for blind motion deblurring. The key highlights are:
The first stage focuses on extracting the features of the high-frequency component, which contains edge and texture information. The second stage concentrates on extracting the features of the low-frequency component, which represents the structural and content information of the image.
The third stage integrates the extracted low-frequency component features, the extracted high-frequency component features, and the original blurred image to recover the final clear image. This fusion of multi-category information effectively improves the motion deblurring performance.
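The paper's exact decomposition operators are not given in this summary, but the high/low-frequency split it describes can be sketched as "low frequency = smoothed image, high frequency = residual". The 3x3 box filter and function names below are illustrative assumptions, not the paper's actual method.

```python
# Sketch of a high/low-frequency decomposition: the low-frequency component
# is a box-blurred version of the image (structure/content), and the
# high-frequency component is the residual (edges/texture).
# The 3x3 mean filter is an illustrative choice, not the paper's operator.

def box_blur(img):
    """3x3 mean filter with edge clamping; returns the low-frequency part."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
            out[y][x] = acc / 9.0
    return out

def split_frequencies(img):
    """Return (low, high) where high = img - low, so low + high == img."""
    low = box_blur(img)
    high = [[img[y][x] - low[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]
    return low, high

# A perfectly flat image has no high-frequency (edge/texture) content.
flat = [[5.0] * 4 for _ in range(4)]
low, high = split_frequencies(flat)
```

Because the split is additive, the third stage can recombine both components with the blurred input without losing information.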
A grouped feature fusion technique is developed to exploit the different categories of features more richly and comprehensively at a deep level.
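The summary does not specify how the grouped fusion works internally; one plausible reading, sketched below under that assumption, is to split two feature vectors into channel groups and gate each group by its relative energy, so that each group draws more from whichever branch carries stronger activations there. The gating rule and all names are hypothetical.

```python
# Hedged sketch of a grouped feature fusion: split each feature vector into
# `groups` channel chunks and fuse chunk-wise with an energy-based gate.
# The mean-absolute-value gate is an illustrative assumption, not the
# paper's actual fusion rule.

def grouped_fusion(feats_a, feats_b, groups):
    assert len(feats_a) == len(feats_b) and len(feats_a) % groups == 0
    size = len(feats_a) // groups
    fused = []
    for g in range(groups):
        a = feats_a[g * size:(g + 1) * size]
        b = feats_b[g * size:(g + 1) * size]
        ea = sum(abs(v) for v in a) / size  # group energy of branch a
        eb = sum(abs(v) for v in b) / size  # group energy of branch b
        w = ea / (ea + eb) if ea + eb > 0 else 0.5
        fused.extend(w * x + (1 - w) * y for x, y in zip(a, b))
    return fused

# Each group keeps the branch that is active there: the first group comes
# entirely from feats_a, the second entirely from feats_b.
fused = grouped_fusion([2.0, 2.0, 0.0, 0.0], [0.0, 0.0, 2.0, 2.0], groups=2)
```

Gating per group rather than per vector lets different channel subsets favor different sources, which is the benefit a grouped fusion has over a single global blend.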
A multi-scale stripe attention mechanism (MSSA) is designed, which effectively combines the anisotropy and multi-scale information of the image, significantly enhancing the deep model's feature-representation capability.
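The MSSA's internals are not detailed in this summary. A minimal sketch, assuming stripe attention means pooling context over full-width horizontal bands and full-height vertical bands (the anisotropic part), gating with a sigmoid, and averaging over several stripe widths (the multi-scale part), might look like this; the sigmoid gate, band shapes, and scales `(1, 2)` are all assumptions.

```python
import math

def stripe_attention(fmap, stripe):
    """Reweight each pixel by context pooled over a full-width horizontal
    stripe and a full-height vertical stripe of thickness `stripe`.
    Assumed formulation, not the paper's exact MSSA."""
    h, w = len(fmap), len(fmap[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Horizontal stripe: band of rows around y, all columns.
            y0, y1 = max(0, y - stripe + 1), min(h, y + stripe)
            rows = [fmap[yy][xx] for yy in range(y0, y1) for xx in range(w)]
            # Vertical stripe: band of columns around x, all rows.
            x0, x1 = max(0, x - stripe + 1), min(w, x + stripe)
            cols = [fmap[yy][xx] for yy in range(h) for xx in range(x0, x1)]
            ctx = sum(rows) / len(rows) + sum(cols) / len(cols)
            attn = 1.0 / (1.0 + math.exp(-ctx))  # sigmoid gate in (0, 1)
            out[y][x] = fmap[y][x] * attn
    return out

def multi_scale_stripe_attention(fmap, scales=(1, 2)):
    """Average stripe-attention outputs over several stripe thicknesses."""
    maps = [stripe_attention(fmap, s) for s in scales]
    h, w = len(fmap), len(fmap[0])
    return [[sum(m[y][x] for m in maps) / len(maps) for x in range(w)]
            for y in range(h)]

attended = multi_scale_stripe_attention([[1.0] * 3 for _ in range(3)])
```

The two stripe orientations give the directional (anisotropic) context, while averaging across stripe widths contributes the multi-scale information the summary credits to MSSA.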
Extensive experiments on various datasets demonstrate that the proposed MCMS outperforms recently published state-of-the-art methods in both qualitative and quantitative evaluations.
Key Insights Extracted From
by Nianzu Qiao,... at arxiv.org 05-03-2024
https://arxiv.org/pdf/2405.01083.pdf