Core Concepts
HybridNeRF leverages the strengths of both surface and volume representations to enable high-quality, real-time rendering of complex scenes with fine details, reflections, and transparency.
Abstract
The paper proposes HybridNeRF, a hybrid surface-volume representation for efficient neural rendering. The key insights are:
Surface-based neural representations can render much more efficiently than fully volumetric NeRFs, requiring far fewer samples per ray. However, they struggle to model fine details, transparency, and view-dependent effects.
HybridNeRF addresses this with a spatially adaptive surfaceness parameter β(x) that lets most of the scene render as surfaces while preserving volumetric modeling in challenging regions; a density sketch follows this list.
The authors use a distance-adjusted Eikonal loss so that the background is accurately reconstructed without degrading foreground surface quality; a sketch of one plausible form follows this list.
HybridNeRF also incorporates rendering optimizations such as hardware texture interpolation and sphere tracing to reach real-time frame rates of at least 36 FPS at 2K×2K resolution; a sphere-tracing sketch follows this list.
Evaluated on the challenging Eyeful Tower dataset as well as other benchmarks, HybridNeRF achieves state-of-the-art reconstruction quality while significantly outperforming prior real-time methods in terms of both speed and fidelity.
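To make the surfaceness idea concrete, here is a minimal sketch of how a spatially varying β(x) can turn a signed distance into volume density, assuming a VolSDF-style Laplace-CDF transfer function (the paper's exact parameterization may differ); the function name and arguments are illustrative. Small β yields a near-step falloff that behaves like a surface; large β yields a soft, volumetric falloff.

```python
import numpy as np

def sdf_to_density(sdf, beta, alpha=1.0):
    # Laplace-CDF transfer (VolSDF-style): density is ~alpha inside the
    # surface (sdf < 0) and decays outside over a width controlled by beta.
    # beta stands in for the spatially varying surfaceness beta(x);
    # querying it from the model per point is assumed, not shown.
    s = -np.asarray(sdf) / beta
    return alpha * np.where(
        s <= 0.0,
        0.5 * np.exp(np.minimum(s, 0.0)),          # outside: exponential tail
        1.0 - 0.5 * np.exp(-np.maximum(s, 0.0)),   # inside: saturates at alpha
    )
```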
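A minimal sketch of a distance-adjusted Eikonal regularizer, assuming the adjustment amounts to down-weighting distant samples; the 1/(1 + r) schedule and the function name are illustrative choices, not the paper's exact formulation.

```python
import torch

def distance_adjusted_eikonal_loss(sdf_net, points):
    # Standard Eikonal term (||grad d(x)|| - 1)^2, down-weighted for
    # distant samples so regularizing the (contracted) background does
    # not degrade foreground surface quality.
    points = points.detach().requires_grad_(True)
    d = sdf_net(points)
    (grad,) = torch.autograd.grad(d.sum(), points, create_graph=True)
    r = points.norm(dim=-1)            # distance from the scene origin
    weight = 1.0 / (1.0 + r)           # illustrative decay schedule
    return (weight * (grad.norm(dim=-1) - 1.0) ** 2).mean()
```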
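For the rendering side, here is a single-ray sketch of classic sphere tracing, which is why surface regions need so few queries: each step advances by the signed distance itself, the largest step guaranteed not to cross the surface. The batching and hardware texture interpolation the paper uses for speed are not modeled here.

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-3, t_max=100.0):
    # March along the ray, stepping by the SDF value until we are within
    # eps of the surface (hit) or exit the scene bounds (miss).
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:
            return t, p        # hit: depth along the ray and hit point
        t += d
        if t > t_max:
            break
    return None, None          # miss

# Example: trace toward a unit sphere at the origin.
sphere = lambda p: np.linalg.norm(p) - 1.0
t, hit = sphere_trace(sphere, np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]))
```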
Stats
Key quantitative claims from the paper's results:
HybridNeRF reduces error rates by 15-30% compared to state-of-the-art baselines.
HybridNeRF renders at real-time frame rates of at least 36 FPS at virtual-reality resolutions (2K×2K).