Core Concepts
The authors propose a Gabor-guided transformer for single image deraining that enhances local texture features and improves robustness to noise, outperforming state-of-the-art methods.
Abstract
The study introduces the Gabor-guided transformer (Gabformer) for single image deraining, addressing the limitation of CNNs in capturing global information. By incorporating Gabor filter information into the query vector, the model focuses on local texture features, which enhances its robustness to noise. The Gabformer architecture comprises multi-Gabor self-attention, Gabor filter, and gated feed-forward network modules. Multi-scale Gabor filters capture edge and texture information comprehensively, while a gating module suppresses unimportant high-frequency information extracted by the Gabor filter, improving performance with fewer network parameters. Extensive benchmark experiments demonstrate that the method outperforms existing approaches and generalizes across a wide range of rain scenes.
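To make the core idea concrete, here is a minimal toy sketch of attention where the query is biased by a Gabor-filtered copy of the input. This is a hypothetical single-head formulation for illustration only — the kernel size, projection dimensions, and the additive way Gabor information enters the query are assumptions, not the paper's actual architecture.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / wavelength + psi)

def conv2d(img, kernel):
    """Naive same-size 2D convolution with reflect padding (for clarity, not speed)."""
    kh, kw = kernel.shape
    H, W = img.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="reflect")
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gabor_guided_attention(img, d=8, seed=0):
    """Toy attention: each pixel is a token; the query additionally carries
    the Gabor response of the input (assumed formulation)."""
    rng = np.random.default_rng(seed)
    tokens = img.reshape(-1, 1)
    gabor_tokens = conv2d(img, gabor_kernel(7, 4.0, 0.0, 2 * np.pi)).reshape(-1, 1)
    Wq, Wk, Wv = (rng.standard_normal((1, d)) for _ in range(3))
    Q = (tokens + gabor_tokens) @ Wq  # query biased toward local texture
    K, V = tokens @ Wk, tokens @ Wv
    return softmax(Q @ K.T / np.sqrt(d)) @ V
```

The design intent this sketches: because the Gabor response is strongest at edges and textures, adding it to the query steers attention weights toward texture-bearing regions of the image.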
Stats
Our model has 34.4M parameters.
Initial learning rate set to 3 × 10^-4.
σ is set to 2π for the Gabor filter.
Four filters with different wavelengths are used.
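The multi-scale filter bank implied by these stats can be sketched as follows. σ = 2π matches the reported value; the four wavelengths and the 15×15 kernel size are illustrative guesses, since the paper's exact values are not given here.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta=0.0, sigma=2 * np.pi, gamma=0.5, psi=0.0):
    """Real Gabor kernel g(x, y) = exp(-(x'^2 + (gamma*y')^2) / (2*sigma^2))
    * cos(2*pi*x'/wavelength + psi), with (x', y') the rotated coordinates."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / wavelength + psi)

# sigma fixed at 2*pi as reported; wavelengths are hypothetical examples.
wavelengths = [2.0, 4.0, 8.0, 16.0]
bank = [gabor_kernel(size=15, wavelength=lam) for lam in wavelengths]
```

Shorter wavelengths respond to fine rain streaks and texture; longer ones capture coarser edges, which is the motivation for combining several scales.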
Quotes
"Our method outperforms state-of-the-art approaches."
"The proposed Gabformer achieves excellent results in a wide range of rain scenes."