Core Concepts
The authors present a deep white-balancing model that leverages slot attention to generate per-illuminant chromaticities and weight maps, achieving state-of-the-art performance on both single- and multi-illuminant white balance (WB) benchmarks.
Abstract
The paper introduces the Attentive Illumination Decomposition (AID) mechanism for multi-illuminant white balancing. The model uses slot attention to predict illumination at the pixel level, outperforming previous methods. AID predicts each illuminant's chromaticity and weight map separately, enabling tunable white balance and illumination editing.
The paper discusses the limitations of existing multi-illuminant WB methods and proposes a novel approach that decomposes mixed illumination into its individual illuminants. In experiments on several datasets, including LSMI and NUS-8, AID demonstrates robustness and outperforms previous models in accuracy.
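The decomposition described above can be sketched numerically: each pixel's illumination is modeled as a weighted mixture of per-illuminant chromaticities, and white balance follows by dividing the image by the reconstructed illumination map. Below is a minimal numpy sketch under that assumption; the function and variable names (`reconstruct_illumination`, `chroma`, `weights`) are illustrative, not from the paper.

```python
import numpy as np

def reconstruct_illumination(chroma, weights):
    """Mix K per-illuminant chromaticities (K, 3) with per-pixel
    weight maps (H, W, K) into an illumination map (H, W, 3)."""
    return weights @ chroma  # matrix product over the K axis

def white_balance(image, illum, eps=1e-6):
    """Divide out the estimated per-pixel illumination."""
    return image / (illum + eps)

# Toy example: two illuminants, 2x2 image.
chroma = np.array([[1.2, 1.0, 0.8],   # warm light
                   [0.8, 1.0, 1.2]])  # cool light
weights = np.full((2, 2, 2), 0.5)     # even 50/50 mixture everywhere
illum = reconstruct_illumination(chroma, weights)
# An even mixture yields the average chromaticity (1.0, 1.0, 1.0) at every pixel.
```

Editing a single slot's chromaticity in `chroma` changes only the light it contributes, which is what makes the per-illuminant decomposition useful for illumination editing.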
Key points include the centroid-matching loss introduced to train slot attention-based models effectively, validation through comprehensive experiments on the LSMI and MIIW datasets, and additional capabilities such as manipulating the chromaticity of each light source. The ablation study highlights the importance of the centroid-matching loss and of the number of slots and iterations in the slot attention module.
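The paper's exact centroid-matching loss is not reproduced here; the following is a minimal sketch of the underlying idea, assuming a permutation-invariant match between K predicted slot chromaticities and K ground-truth illuminant centroids (the brute-force matching and all names are illustrative assumptions).

```python
import itertools
import numpy as np

def centroid_matching_loss(pred, gt):
    """Match predicted slot chromaticities (K, 3) to ground-truth
    illuminant centroids (K, 3) by brute force: take the permutation
    with the smallest total L2 distance and return its mean distance."""
    k = pred.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(k)):
        cost = np.linalg.norm(pred[list(perm)] - gt, axis=1).sum()
        best = min(best, cost)
    return best / k

# Toy example: predictions are a shuffled, slightly offset copy of the GT.
gt = np.array([[1.2, 1.0, 0.8],
               [0.8, 1.0, 1.2]])
pred = gt[::-1] + 0.01  # swapped slot order, small constant offset
# The matching is order-invariant, so only the 0.01 offset contributes.
```

Because slots carry no fixed ordering, a loss like this must be invariant to slot permutation; the brute-force search is fine for the small illuminant counts typical of WB scenes, while larger K would call for Hungarian matching.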
Stats
Our method achieves state-of-the-art performance with an MAE of 1.66.
AID outperforms LSMI-U with an MAE of 1.07 on the MIIW dataset.
AID demonstrates an average MAE improvement from 2.85 to 1.19 on the LSMI Galaxy subset.
Quotes
"Our method generates more natural and ground truth-like WB results compared to previous approaches."
"AID accurately predicts the chromaticity and number of each illuminant in a scene."