CodeEnhance leverages quantized priors and image refinement to enhance low-light images by learning an image-to-code mapping, integrating semantic information, adapting the codebook, and refining texture and color information.
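The image-to-code mapping at the heart of this approach rests on codebook (vector-quantization) lookup: each encoder feature is replaced by its nearest entry in a learned codebook, which acts as the quantized prior. The sketch below illustrates only that lookup step with numpy; the function name, toy data, and distance choice are illustrative, not CodeEnhance's actual implementation.

```python
import numpy as np

def quantize_to_codebook(features, codebook):
    """Map each feature vector to its nearest codebook entry (squared L2).

    features: (N, D) array of encoder features
    codebook: (K, D) array of learned code vectors
    Returns the quantized features and the chosen code indices.
    """
    # Pairwise squared distances between features and codes: shape (N, K)
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)          # index of the nearest code per feature
    return codebook[idx], idx

# Toy example: 3 features quantized against a 4-entry codebook in 2-D
feats = np.array([[0.1, 0.0], [0.9, 1.1], [0.5, 0.5]])
codes = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5], [2.0, 2.0]])
quantized, idx = quantize_to_codebook(feats, codes)
```

In the full method, the decoder then reconstructs the enhanced image from these quantized features, and "adapting the codebook" refers to updating the code vectors so they better cover low-light content.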
PIE learns effectively from unpaired positive/negative samples and achieves smooth, non-semantic regional enhancement by combining physics-inspired contrastive learning with an unsupervised regional segmentation module.
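Learning from unpaired positive/negative samples typically means a contrastive objective that pulls the enhanced result toward well-lit exemplars and pushes it away from low-light ones, without requiring pixel-aligned pairs. The sketch below shows one generic triplet-style form of such a loss; it is a simplified stand-in, not PIE's physics-inspired formulation.

```python
import numpy as np

def contrastive_loss(anchor, positives, negatives, margin=1.0):
    """Triplet-style contrastive objective on feature vectors: the anchor
    (enhanced output) should be closer to well-lit positives than to
    low-light negatives by at least `margin`. Distances are mean L2.
    """
    d_pos = np.mean([np.linalg.norm(anchor - p) for p in positives])
    d_neg = np.mean([np.linalg.norm(anchor - n) for n in negatives])
    return max(0.0, d_pos - d_neg + margin)

# A well-enhanced anchor sits near the positives, so the loss bottoms out
good = contrastive_loss(np.array([0.0, 0.0]),
                        [np.array([0.1, 0.0])],   # well-lit positive
                        [np.array([5.0, 5.0])])   # low-light negative
```

Because positives and negatives need not be paired views of the same scene, this style of supervision works with unpaired data collections.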
This paper proposes a new Digital-Imaging Retinex theory (DI-Retinex) that accounts for the factors that undermine the validity of classic Retinex theory in digital imaging, namely noise, quantization error, non-linearity, and dynamic range overflow. From this theory, the authors derive an efficient low-light image enhancement model that outperforms existing unsupervised methods.
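For context, the classic Retinex model that DI-Retinex revisits writes an observed image as the pixel-wise product of reflectance and illumination, I = R ⊙ L, and enhancement amounts to estimating L and recovering R. The sketch below shows that classic decomposition with a crude max-channel illumination estimate; it is an illustration of the baseline model only, and does not include the noise, quantization, non-linearity, or overflow terms that DI-Retinex adds.

```python
import numpy as np

def classic_retinex_enhance(img, eps=1e-3):
    """Classic Retinex decomposition I = R * L.

    Estimates illumination L as the per-pixel channel maximum (a common
    crude estimator) and recovers reflectance R = I / L.
    img: float array in [0, 1], shape (H, W, 3)
    """
    L = img.max(axis=2, keepdims=True)   # crude illumination estimate
    R = img / np.maximum(L, eps)         # reflectance, roughly in [0, 1]
    return R, L

# Toy check: compose I = R * L, then recover both factors
R_true = np.array([[[1.0, 0.5, 0.2]]])   # reflectance (max channel = 1)
img = R_true * 0.5                       # uniform illumination L = 0.5
R_est, L_est = classic_retinex_enhance(img)
```

In a real camera pipeline this clean multiplicative model breaks down, which is exactly the gap the paper's DI-Retinex theory targets.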
EvLight is a novel event-guided low-light image enhancement framework that selectively fuses event and image features in both a holistic and a region-wise manner to achieve robust performance, built on SDE, a large-scale real-world event-image dataset.
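The idea of region-wise selective fusion can be reduced to a per-region gate that decides how much to trust each modality: event cameras retain signal in dark regions where frame features degrade. The sketch below shows a minimal weighted-sum gate, assuming a precomputed per-region weight map; it is a hypothetical simplification, not EvLight's actual fusion module.

```python
import numpy as np

def region_wise_fuse(img_feat, ev_feat, region_weight):
    """Blend image and event feature maps with a per-region gate.

    region_weight in [0, 1]: near 1 trusts the image branch (bright,
    well-exposed regions); near 0 trusts the event branch (dark regions).
    img_feat, ev_feat: (H, W, C) feature maps; region_weight: (H, W, 1).
    """
    return region_weight * img_feat + (1.0 - region_weight) * ev_feat

# Left column simulates a bright region (gate = 1), right column a dark one
img_feat = np.ones((2, 2, 4))
ev_feat = np.zeros((2, 2, 4))
w = np.array([[[1.0], [0.0]], [[1.0], [0.0]]])   # (2, 2, 1) region gate
fused = region_wise_fuse(img_feat, ev_feat, w)
```

In the full framework such a gate would be predicted by the network (e.g. from signal-to-noise cues) rather than supplied by hand, and it complements a holistic fusion path that mixes the two modalities globally.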