Core Concepts
LYT-Net, a lightweight transformer-based network, achieves state-of-the-art performance on low-light image enhancement tasks while maintaining high computational efficiency.
Summary
The paper introduces LYT-Net, a novel approach to low-light image enhancement that leverages the YUV color space and a transformer-based architecture. Key highlights:
- LYT-Net exploits the natural separation of luminance (Y) and chrominance (U, V) in the YUV color space to simplify the disentangling of light and color information (a color-conversion sketch follows this list).
- The model applies a multi-headed self-attention scheme to the denoised luminance and chrominance layers to improve feature fusion (a minimal attention sketch follows this list).
- A hybrid loss function, combining Smooth L1, perceptual, histogram, PSNR, color, and multi-scale SSIM terms, plays a critical role in the efficient training of LYT-Net (an illustrative loss sketch follows this list).
- Extensive experiments on the LOL datasets (LOL-v1, LOL-v2-real, and LOL-v2-synthetic) demonstrate that LYT-Net achieves state-of-the-art performance while being significantly more computationally efficient than its counterparts.
- Qualitative results show that LYT-Net effectively enhances low-light images, balancing exposure and color fidelity while outperforming other methods in contrast and detail preservation.
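
To make the YUV separation concrete, below is a minimal sketch of an RGB-to-YUV conversion. The BT.601 coefficients are an assumption; the paper does not state which conversion standard it uses.

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Convert an RGB image (H, W, 3), values in [0, 1], to YUV.

    BT.601 coefficients are assumed. Y carries luminance; U and V
    carry blue- and red-difference chrominance, respectively.
    """
    m = np.array([
        [ 0.299,    0.587,    0.114  ],   # Y
        [-0.14713, -0.28886,  0.436  ],   # U
        [ 0.615,   -0.51499, -0.10001],   # V
    ])
    return rgb @ m.T

# Example: split a (stand-in) low-light image into light and color channels.
img = np.random.rand(256, 256, 3)
yuv = rgb_to_yuv(img)
y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]
```

Operating on Y separately from U and V is what lets the model adjust exposure without directly perturbing color.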
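The multi-headed self-attention step can be pictured as attention over the spatial tokens of each denoised feature map. The following PyTorch sketch is illustrative only: the channel count, head count, normalization, residual connection, and the concatenation-based fusion at the end are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SpatialSelfAttention(nn.Module):
    """Multi-headed self-attention over the spatial tokens of a feature map."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> tokens: (B, H*W, C)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)  # residual + layer norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# Example: attend over hypothetical denoised Y and U/V feature maps, then fuse.
y_feat = torch.randn(1, 32, 64, 64)
uv_feat = torch.randn(1, 32, 64, 64)
attn = SpatialSelfAttention(channels=32)
fused = torch.cat([attn(y_feat), attn(uv_feat)], dim=1)  # simple concat fusion
```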
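The hybrid loss is a weighted sum of several terms. The sketch below combines the Smooth L1, PSNR, and color components with placeholder weights; the perceptual (VGG), histogram, and multi-scale SSIM terms are stubbed out, and neither the weights nor the exact formulations are the paper's.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Illustrative hybrid loss; weights and term definitions are assumptions.

    Perceptual (VGG), histogram, and multi-scale SSIM terms are omitted
    for brevity; they would enter as further weighted components.
    """
    # Smooth L1 on raw pixels.
    l_smooth = F.smooth_l1_loss(pred, target)

    # PSNR-based term: penalize low PSNR relative to a ceiling
    # (40 dB here is an arbitrary choice; inputs assumed in [0, 1]).
    mse = F.mse_loss(pred, target)
    psnr = 10.0 * torch.log10(1.0 / (mse + 1e-8))
    l_psnr = torch.clamp(40.0 - psnr, min=0.0)

    # Color term: match per-channel global means, a simple proxy
    # for color fidelity.
    l_color = (pred.mean(dim=(2, 3)) - target.mean(dim=(2, 3))).abs().mean()

    return l_smooth + 0.01 * l_psnr + 0.5 * l_color  # placeholder weights

# Example usage with dummy tensors shaped (B, C, H, W).
loss = hybrid_loss(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```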
Statistics
LYT-Net achieves PSNR of 22.38, 27.23, and 23.78 on the LOL-v1, LOL-v2-real, and LOL-v2-synthetic datasets, respectively.
LYT-Net achieves SSIM of 0.826, 0.853, and 0.921 on the LOL-v1, LOL-v2-real, and LOL-v2-synthetic datasets, respectively.
LYT-Net has a computational complexity of 3.49 GFLOPs and only 0.045M parameters.
Quotes
"LYT-Net, a lightweight model that employs the YUV color space to target enhancements. It utilizes a multi-headed self-attention scheme on the denoised luminance and chrominance layers, aiming for improved fusion at the end of the process."
"A hybrid loss function was designed, playing a critical role in the efficient training of our model and significantly contributing to its enhancement capabilities."