The paper introduces DyNFL, a neural field-based method for high-fidelity re-simulation of LiDAR scans in dynamic driving scenes. The key contributions are:
Scene Decomposition: The scene is decomposed into a static background and N dynamic vehicles, each modeled by a dedicated neural field (see the sketch after this list).
Neural Field Composition: A novel composition technique integrates reconstructed neural assets, potentially from different scenes, while accounting for occlusions and transparent surfaces. This enables flexible scene editing.
SDF-based Volume Rendering: The method employs a signed distance function (SDF)-based volume rendering formulation to accurately model the physical LiDAR sensing process, improving the realism of the re-simulated scans.
Evaluation: DyNFL is evaluated on both synthetic and real-world datasets, demonstrating substantial improvements in dynamic scene LiDAR simulation compared to baseline methods. It offers a combination of physical fidelity and flexible editing capabilities.
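To make the decomposition concrete, the sketch below represents the scene as one static background field plus a list of per-vehicle fields, each carrying its tracked per-frame pose. This is a minimal illustration under assumed names (`SceneGraph`, `VehicleAsset`, `fields_at` are all hypothetical, not an API from the paper):

```python
# Hedged sketch of the static/dynamic scene decomposition.
# All class and method names here are illustrative assumptions.
from dataclasses import dataclass, field
import torch

@dataclass
class VehicleAsset:
    """One dynamic vehicle: its own neural field plus a tracked trajectory."""
    field_fn: torch.nn.Module        # network queried in the box's local frame
    poses: dict[int, torch.Tensor]   # frame index -> 4x4 box-to-world transform
    size: torch.Tensor               # box extents (l, w, h), used to clip queries

@dataclass
class SceneGraph:
    """Static background field plus N dynamic vehicle fields."""
    static_field: torch.nn.Module
    vehicles: list[VehicleAsset] = field(default_factory=list)

    def fields_at(self, frame: int):
        """All fields active at a frame, paired with world-to-local transforms."""
        out = [(self.static_field, torch.eye(4))]  # background lives in world frame
        for v in self.vehicles:
            if frame in v.poses:
                out.append((v.field_fn, torch.linalg.inv(v.poses[frame])))
        return out
```

Keeping each vehicle in its own local frame is what makes the assets reusable: a field trained in one scene can be dropped into another simply by supplying a new pose trajectory.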
The paper first provides an overview of the DyNFL pipeline, which takes LiDAR scans and tracked bounding boxes of dynamic vehicles as input, then decomposes the scene into a static background and dynamic vehicles, each represented by a dedicated neural field. The key innovation is the neural field composition technique, which merges the reconstructed neural assets ray by ray while accounting for occlusions and transparent surfaces, as sketched below.
The paper then details the SDF-based volume rendering formulation used to model the LiDAR sensing process, followed by the optimization procedure for training the neural scene representation.
The experimental evaluation demonstrates that DyNFL outperforms baseline methods in range and intensity estimation as well as perceptual fidelity. It also enables various scene edits, such as altering object trajectories, removing objects, and adding new ones, showcasing its flexibility.