Core Concepts
A decoupled neural rendering framework that can reconstruct clean scenes from multi-view rainy images in an unsupervised manner by effectively separating high-frequency scene details from rain streaks.
Summary
The paper proposes RainyScape, an unsupervised framework for reconstructing clean scenes from a collection of multi-view rainy images. The framework consists of two main modules:
A neural rendering module that obtains a low-frequency representation of the scene.
A rain-prediction module that incorporates a predictor network and a learnable latent embedding to capture the rain characteristics of the scene.
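The two modules above imply an additive image model: the rendered low-frequency scene plus a predicted rain layer should reconstruct the rainy observation. A minimal numpy sketch of that composition (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def compose_rainy(low_freq_scene: np.ndarray, rain_layer: np.ndarray) -> np.ndarray:
    """Additive rain model: rainy observation ≈ clean low-frequency scene + rain residual.
    Values are clipped to the valid [0, 1] image range."""
    return np.clip(low_freq_scene + rain_layer, 0.0, 1.0)

# Toy example: a flat gray scene with a bright diagonal streak standing in for rain.
scene = np.full((8, 8), 0.4)
rain = np.zeros((8, 8))
np.fill_diagonal(rain, 0.5)
rainy = compose_rainy(scene, rain)
```

During joint optimization, the rendering module is encouraged to explain only the low-frequency scene term, while the rain predictor absorbs the high-frequency streak residual.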
The key contributions are:
Leveraging the spectral bias property of neural networks, the framework first optimizes the neural rendering pipeline to obtain a low-frequency scene representation.
It then jointly optimizes the two modules, driven by an adaptive direction-sensitive gradient-based reconstruction loss, which encourages the network to distinguish between scene details and rain streaks.
The framework can be readily adapted to work with various rendering techniques, demonstrating its versatility and flexibility.
To address the lack of multi-view rainy scene datasets, the authors render 10 scenes using Maya, producing rain streaks that are more consistent and realistic than those generated by simple simulation methods.
Extensive experiments on both classic neural radiance fields (NeRF) and the recently proposed 3D Gaussian splatting demonstrate that the method effectively eliminates rain streaks and renders clean images, achieving state-of-the-art performance.
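The adaptive direction-sensitive gradient loss can be illustrated with a simple numpy sketch. This is an assumption-laden toy version, not the paper's exact formulation: gradients along the dominant rain-streak direction (here taken as vertical) are down-weighted, so streak edges are attributed to the rain module rather than baked into the scene representation.

```python
import numpy as np

def directional_gradients(img: np.ndarray):
    """Finite-difference gradients along x (horizontal) and y (vertical)."""
    gx = np.diff(img, axis=1)
    gy = np.diff(img, axis=0)
    return gx, gy

def direction_sensitive_loss(pred: np.ndarray, target: np.ndarray,
                             w_x: float = 1.0, w_y: float = 0.1) -> float:
    """Gradient-based reconstruction loss with per-direction weights (illustrative).
    For near-vertical rain streaks, gradients along y (the streak direction)
    are down-weighted (w_y < w_x), so the scene network is not pressured
    to reproduce streak edges."""
    gx_p, gy_p = directional_gradients(pred)
    gx_t, gy_t = directional_gradients(target)
    loss_x = np.mean(np.abs(gx_p - gx_t))
    loss_y = np.mean(np.abs(gy_p - gy_t))
    return w_x * loss_x + w_y * loss_y
```

With these weights, a residual vertical edge (characteristic of scene structure) costs more than a residual horizontal edge of the same magnitude, which is the kind of asymmetry that helps separate scene details from rain streaks.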