The paper proposes HybridNeRF, a hybrid surface-volume representation for efficient neural rendering. The key insights are:
Surface-based neural representations can render much more efficiently than fully volumetric NeRFs, requiring far fewer samples per ray. However, they struggle to model fine details, transparency, and view-dependent effects.
HybridNeRF addresses this by using a spatially-adaptive surfaceness parameter β(x) that allows most of the scene to be rendered as surfaces, while preserving volumetric modeling in challenging regions.
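The spatially-adaptive surfaceness idea can be illustrated with a VolSDF-style conversion from signed distance to volume density, where the sharpness parameter β varies per point. The `sdf_to_density` function and the specific β values below are an illustrative sketch, not the paper's exact formulation: a tiny β makes the density transition nearly step-like (surface behavior), while a larger β keeps a soft, volumetric falloff.

```python
import numpy as np

def sdf_to_density(sdf, beta):
    """Laplace-CDF conversion of signed distance to volume density
    (VolSDF-style sketch). Small beta -> sharp, surface-like transition;
    large beta -> soft, volumetric transition."""
    alpha = 1.0 / beta
    return np.where(
        sdf <= 0,
        alpha * (1.0 - 0.5 * np.exp(sdf / beta)),   # inside the surface
        alpha * 0.5 * np.exp(-sdf / beta),          # outside the surface
    )

# Spatially-adaptive beta (hypothetical values): "easy" regions get a tiny
# beta and behave like surfaces; challenging regions keep a larger beta
# and remain volumetric.
sdf = np.linspace(-0.2, 0.2, 5)
density_surface_like = sdf_to_density(sdf, np.full_like(sdf, 1e-3))
density_volumetric = sdf_to_density(sdf, np.full_like(sdf, 1e-1))
```

With a small β, almost all of the density mass concentrates at the zero level set, which is what lets the renderer treat those regions with very few samples per ray.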
The authors use a distance-adjusted Eikonal loss to ensure the background is accurately reconstructed without degrading the foreground surface quality.
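One plausible way to realize a distance-adjusted Eikonal loss is to keep the standard unit-gradient penalty on the SDF but down-weight it for samples far from the scene origin, so the background can deviate from a true SDF without dragging down foreground surface quality. The inverse-distance weighting below is an illustrative choice, not necessarily the paper's exact schedule:

```python
import numpy as np

def distance_adjusted_eikonal(grad_norms, dists, near=1.0):
    """Eikonal penalty (|grad d| - 1)^2 with a per-sample weight that
    decays with distance from the scene origin. Distant background points
    constrain the SDF less than nearby foreground points.

    grad_norms: |grad d(x)| at each sample
    dists:      distance of each sample from the origin
    near:       radius inside which the penalty gets full weight
    """
    weights = 1.0 / np.maximum(dists, near)
    return float(np.mean(weights * (grad_norms - 1.0) ** 2))
```

A perfect SDF (all gradient norms equal to 1) incurs zero loss regardless of distance; the same gradient error costs less when it occurs in the far background.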
HybridNeRF also incorporates several rendering optimizations, such as hardware texture interpolation and sphere tracing, to achieve real-time frame rates (at least 36 FPS) at 2K×2K resolution.
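Sphere tracing exploits the SDF directly: because the signed distance lower-bounds the distance to the nearest surface, the ray can safely advance by the SDF value at each step, converging in a handful of iterations for surface-like regions. A minimal sketch (the step limits and tolerances are illustrative):

```python
def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-4, t_max=20.0):
    """March along origin + t * direction, stepping by the SDF value.
    Returns the hit distance t, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = sdf(p)
        if d < eps:
            return t  # converged onto the surface
        t += d        # SDF guarantees this step cannot overshoot
        if t > t_max:
            break
    return None

# Example: unit sphere at the origin; a ray from z = -3 toward +z
# should hit the surface at t = 2.
unit_sphere = lambda p: (p[0]**2 + p[1]**2 + p[2]**2) ** 0.5 - 1.0
t_hit = sphere_trace(unit_sphere, (0.0, 0.0, -3.0), (0.0, 0.0, 1.0))  # → 2.0
```

Compared with dense volumetric sampling (often hundreds of samples per ray), this needs only a few SDF evaluations, which is a large part of how surface-dominated rendering reaches real-time frame rates.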
Evaluated on the challenging Eyeful Tower dataset as well as other benchmarks, HybridNeRF achieves state-of-the-art reconstruction quality while significantly outperforming prior real-time methods in terms of both speed and fidelity.