Core Concepts
DrivingGaussian introduces Composite Gaussian Splatting to efficiently represent large-scale dynamic autonomous driving scenes, enabling high-quality synthesis of surrounding views and outperforming existing methods.
Abstract
Introduction:
Representing large-scale dynamic scenes is crucial for autonomous driving tasks.
Challenges arise from sparse sensor data and high-speed movements.
Neural Radiance Fields:
NeRF performs poorly in unbounded outdoor scenes because its volumetric sampling assumes a bounded depth range, so distant content is under-sampled.
Extensions like Mip-NeRF and Urban-NeRF address large-scale static scenes.
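One common remedy in these extensions is scene contraction, which warps unbounded space into a bounded ball so far-away geometry remains representable. The sketch below shows a Mip-NeRF 360-style contraction; it is illustrative, not the code of any of the cited methods.

```python
import numpy as np

def contract(x: np.ndarray) -> np.ndarray:
    """Mip-NeRF 360-style contraction: points with norm <= 1 are kept,
    points outside are mapped into a ball of radius 2.
    (Illustrative sketch, not DrivingGaussian's implementation.)"""
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    return np.where(norm <= 1.0, x, (2.0 - 1.0 / norm) * (x / norm))

# A very distant point is pulled just inside the radius-2 boundary:
p = contract(np.array([100.0, 0.0, 0.0]))  # -> approximately [1.99, 0, 0]
```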
3D Gaussian Splatting:
Original method excels in static scenes but struggles with dynamics.
Extensions like Dynamic 3D-GS focus on modeling dynamic objects.
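For context, 3D Gaussian Splatting renders a pixel by depth-sorting the projected Gaussians and alpha-compositing them front to back. A minimal sketch of that compositing step (the per-Gaussian opacities and colours here are assumed inputs, already projected and sorted):

```python
import numpy as np

def composite(alphas, colors):
    """Front-to-back alpha compositing as used in 3D Gaussian Splatting:
    C = sum_i c_i * alpha_i * prod_{j<i} (1 - alpha_j),
    with Gaussians sorted front-to-back by depth."""
    out = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for a, c in zip(alphas, colors):
        out += transmittance * a * np.asarray(c, dtype=float)
        transmittance *= (1.0 - a)
    return out

# A red Gaussian in front of a blue one: the red dominates, the blue
# contributes only through the residual transmittance.
pixel = composite([0.8, 0.5], [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)])
# -> [0.8, 0.0, 0.1]
```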
Method:
DrivingGaussian hierarchically models the scene: Incremental Static 3D Gaussians reconstruct the static background progressively along the ego trajectory, while a Composite Dynamic Gaussian Graph handles multiple moving objects.
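The graph idea can be sketched as follows: each dynamic object is a node holding its own Gaussians in object space plus a per-frame pose, and rendering a frame transforms every node into the world frame and concatenates it with the static Gaussians. The names here (`Node`, `compose_frame`) and the data layout are illustrative assumptions, not the paper's API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One dynamic object in the graph (illustrative sketch)."""
    gaussians: list                 # object-space Gaussian centres [x, y, z]
    poses: dict = field(default_factory=dict)  # frame index -> (R, t)

def compose_frame(static_gaussians, nodes, frame):
    """Compose static and dynamic Gaussians for one frame by applying
    each node's pose (rotation R, translation t) to its centres."""
    composed = [list(mu) for mu in static_gaussians]
    for node in nodes:
        R, t = node.poses[frame]
        for mu in node.gaussians:
            composed.append([sum(R[i][j] * mu[j] for j in range(3)) + t[i]
                             for i in range(3)])
    return composed
```

Per-frame poses let each object move rigidly while the static background is reconstructed once and reused across all frames.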
Experiments:
Outperforms state-of-the-art methods in dynamic scene reconstruction and view synthesis.
Ablation Study:
Highlights the contribution of each module, showing the effectiveness of the LiDAR prior and the proposed loss functions.
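One way such a LiDAR prior can work is to initialise Gaussian centres directly from the LiDAR point cloud instead of sparse SfM points. The sketch below sets each initial scale from the mean k-nearest-neighbour distance; this heuristic is an assumption for illustration, and the paper's exact recipe may differ.

```python
import numpy as np

def init_from_lidar(points: np.ndarray, k: int = 3):
    """Sketch of LiDAR-based initialisation: LiDAR points become Gaussian
    centres; each initial scale is the mean distance to the k nearest
    neighbours (assumed heuristic, O(n^2) for clarity)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-distance
    knn = np.sort(d, axis=1)[:, :k]        # k nearest neighbours per point
    scales = knn.mean(axis=1)
    return points, scales
```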
Corner Case Simulation:
Demonstrates the ability to simulate challenging scenarios accurately.
Stats
"DrivingGaussian enables high-quality synthesis of surrounding views across multiple cameras."
"Our method achieves state-of-the-art performance on public autonomous driving datasets."