Core Concepts
A novel Retinex-based Mamba architecture that leverages the linear-time efficiency of State Space Models and a Fused-Attention mechanism to enhance low-light images while preserving image quality.
Summary
The paper introduces the RetinexMamba architecture, which combines the Retinex theory and the Mamba model to address the limitations of existing low-light image enhancement methods.
Key highlights:
- The architecture is divided into an Illumination Estimator and a Damage Restorer. The Illumination Estimator applies Retinex theory to separate the image into its illumination and reflectance components, while the Damage Restorer employs the Illumination Fusion Visual Mamba (IFVM) to restore image quality.
- The core component of IFVM is the Illumination Fusion State Space Model (IFSSM), which uses 2D Selective Scanning (SS2D) to achieve linear computational complexity and an Illumination Fusion Attention (IFA) mechanism to improve the interpretability of the attention process.
- Extensive experiments on the LOL dataset demonstrate that RetinexMamba outperforms existing deep learning methods based on Retinex theory in both quantitative and qualitative metrics, confirming its effectiveness and superiority in enhancing low-light images.
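The Illumination Estimator builds on the classic Retinex model, in which a low-light image I is the element-wise product of reflectance R (scene content) and illumination L. A minimal NumPy sketch of that decomposition, using the common channel-max heuristic for L rather than the paper's learned estimator (`retinex_decompose`, `light_up`, and the gamma value are illustrative assumptions):

```python
import numpy as np

def retinex_decompose(image, eps=1e-4):
    """Split an image I into reflectance R and illumination L under the
    Retinex model I = R * L, estimating L as the per-pixel channel max
    (a common heuristic, not the paper's learned Illumination Estimator)."""
    illumination = image.max(axis=-1, keepdims=True)   # L: shape (H, W, 1)
    reflectance = image / (illumination + eps)         # R ~= I / L
    return reflectance, illumination

def light_up(image, gamma=0.45, eps=1e-4):
    """Brighten by re-lighting the reflectance with a gamma-lifted
    illumination map -- a stand-in for the learned enhancement stage."""
    r, l = retinex_decompose(image, eps)
    return np.clip(r * (l + eps) ** gamma, 0.0, 1.0)

# A uniformly dim synthetic image: pixel values of 0.1 get lifted.
dim = np.full((4, 4, 3), 0.1)
bright = light_up(dim)
```

In the full architecture the estimated illumination is not only used to re-light the image but is also passed onward as guidance for the restoration stage.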
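The linear cost of the SS2D scan comes from the underlying state-space recurrence, which visits each token exactly once. A 1D toy version of such a scan (the parameter names `a`, `b`, `c` and the example values are assumptions for illustration, not the paper's discretized SSM):

```python
import numpy as np

def selective_scan_1d(x, a, b, c):
    """Linear-time SSM recurrence: h_t = a_t * h_{t-1} + b_t * x_t,
    y_t = c_t * h_t, with per-step ("selective") parameters. One pass
    over the sequence gives O(n) cost, unlike O(n^2) attention."""
    h = 0.0
    ys = []
    for xt, at, bt, ct in zip(x, a, b, c):
        h = at * h + bt * xt   # update hidden state from the previous step
        ys.append(ct * h)      # read out the current output
    return np.array(ys)

seq = np.array([1.0, 2.0, 3.0, 4.0])
ones = np.ones_like(seq)
decay = np.full_like(seq, 0.5)       # constant decay for the toy example
y = selective_scan_1d(seq, decay, ones, ones)
# y == [1.0, 2.5, 4.25, 6.125]
```

SS2D extends this idea to images by scanning the 2D feature map along several directions and merging the results, keeping the per-token cost constant.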
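The IFA mechanism fuses illumination features into the attention computation. The paper's exact fusion rule is not reproduced here; the sketch below simply scales the value projection by a per-token illumination feature map (the shapes, weights, and element-wise fusion rule are all illustrative assumptions):

```python
import numpy as np

def illumination_fused_attention(x, light_feat, wq, wk, wv):
    """Scaled dot-product attention whose values are modulated by
    illumination features -- a hypothetical stand-in for IFA, not the
    paper's implementation."""
    q, k, v = x @ wq, x @ wk, x @ wv
    v = v * light_feat                              # fuse illumination cues
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)        # row-wise softmax
    return attn @ v

rng = np.random.default_rng(0)
n, d = 16, 8                                 # 16 tokens, 8-dim features
x = rng.normal(size=(n, d))
light = rng.uniform(0.5, 1.5, size=(n, d))   # per-token illumination features
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = illumination_fused_attention(x, light, wq, wk, wv)
```

Feeding the estimated illumination in explicitly, rather than leaving the network to infer lighting implicitly, is what the paper credits for the improved interpretability of the attention process.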
Stats
The paper reports the following key metrics on the LOL dataset:
On LOL-v1, RetinexMamba achieved a PSNR of 24.025, SSIM of 0.827, and RMSE of 8.17.
On LOL-v2-real, RetinexMamba achieved a PSNR of 22.000, SSIM of 0.849, and RMSE of 9.53.
Quotes
"RetinexMamba not only captures the physical intuitiveness of traditional Retinex methods but also integrates the deep learning framework of Retinexformer, leveraging the computational efficiency of State Space Models (SSMs) to enhance processing speed."
"RetinexMamba replaces the IG-MSA (Illumination-Guided Multi-Head Attention) in Retinexformer with a Fused-Attention mechanism, improving the model's interpretability."