The content discusses a novel method for real-time rendering using neural super-resolution with radiance demodulation. By separating lighting and material components, the method preserves rich texture details. A reliable warping module avoids ghosting artifacts, while a frame-recurrent neural network enhances temporal stability.
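The demodulation idea can be illustrated with a minimal sketch: divide the rendered radiance by the material (albedo) component so the network super-resolves a smooth lighting signal, then multiply the high-resolution albedo back in to restore texture detail. The function names and toy values below are hypothetical, not from the paper.

```python
import numpy as np

def demodulate(radiance, albedo, eps=1e-4):
    """Separate lighting from material: lighting = radiance / max(albedo, eps)."""
    return radiance / np.maximum(albedo, eps)

def remodulate(lighting, albedo_hr):
    """Re-apply the (high-resolution) material component after upsampling."""
    return lighting * albedo_hr

# Toy 2x2 low-resolution buffers (hypothetical values)
radiance = np.array([[0.8, 0.2], [0.4, 0.6]])
albedo   = np.array([[0.9, 0.5], [0.5, 0.8]])

lighting = demodulate(radiance, albedo)   # smooth, texture-free lighting
# ... a neural network would super-resolve `lighting` here ...
recon = remodulate(lighting, albedo)      # texture detail restored exactly
assert np.allclose(recon, radiance)
```

Because the albedo carries most of the high-frequency texture, the network only has to reconstruct the comparatively smooth lighting signal, which is why texture detail survives the upsampling.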
The paper compares the proposed method with existing techniques like NSRR, BasicVSR++, TTVSR, and RVRT. Results show superior performance in terms of quality metrics like PSNR and SSIM across various scenes. The method's efficiency is highlighted by its low parameter count and running time compared to other methods.
Ablation studies demonstrate the importance of radiance demodulation, motion mask generation, recurrent framework, and temporal loss in improving reconstruction quality and temporal stability. Generalization ability tests show that the method can be trained on multiple scenes for broader applicability.
Overall, the paper presents an innovative approach to real-time rendering that combines advanced techniques to achieve high-quality results efficiently.