Key Concepts
Advancements in V2X technologies enable autonomous vehicles to share sensing information, leading to the creation of the V2X-Real dataset for cooperative perception research.
Abstract
The V2X-Real dataset is introduced to facilitate research in Vehicle-to-Everything (V2X) cooperative perception. It includes LiDAR frames, camera data, and annotated bounding boxes. The dataset supports various collaboration modes and ego perspectives, providing a comprehensive benchmark for multi-class multi-agent methods.
Outline:
Introduction:
- Recent advancements in autonomous driving technology.
- Challenges with single-vehicle vision systems.
- Importance of V2X Cooperative Perception.
Related Work:
- Overview of existing self-driving datasets such as KITTI and nuScenes.
- Introduction of V2V cooperative perception datasets like OPV2V.
V2X-Real Datasets:
- Data acquisition details using smart infrastructure and automated vehicles.
- Annotation process and strategies used for 3D bounding boxes.
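The annotations described above are 3D bounding boxes attached to object categories. As an illustration of what one such annotation record might contain, here is a minimal sketch; the field names and coordinate conventions are assumptions for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Box3D:
    # Hypothetical annotation record; field names are illustrative only.
    x: float       # box center in the agent's LiDAR frame (meters)
    y: float
    z: float
    length: float  # box extents (meters)
    width: float
    height: float
    yaw: float     # heading angle around the vertical axis (radians)
    category: str  # object class, e.g. "car" or "pedestrian"

def box_volume(box: Box3D) -> float:
    """Volume of the box, handy as a sanity check on annotations."""
    return box.length * box.width * box.height

box = Box3D(x=10.0, y=-2.5, z=0.9, length=4.5, width=1.8, height=1.6,
            yaw=0.1, category="car")
print(round(box_volume(box), 2))  # 12.96
```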
Tasks:
- Description of the V2X Cooperative 3D object detection task.
- Metrics used for evaluation including Average Precision (AP) calculations.
Experiments:
- Implementation details for training models on the dataset.
- Benchmark results showcasing performance of different fusion strategies.
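Cooperative-perception fusion strategies are commonly grouped into early fusion (sharing raw sensor data), intermediate fusion (sharing features), and late fusion (sharing final detections). As a toy illustration of late fusion only, the sketch below pools detections from two agents and suppresses duplicates by center distance; this is a generic illustration, not the benchmark's actual method.

```python
import math

def late_fuse(ego_dets, other_dets, dist_thresh=2.0):
    """Naive late fusion: pool detections from two agents, assumed
    already transformed into a shared coordinate frame, and suppress
    duplicates by center distance, keeping the higher-confidence box.
    Each detection is a (x, y, score) tuple (illustrative format).
    """
    pooled = sorted(ego_dets + other_dets, key=lambda d: -d[2])
    kept = []
    for det in pooled:
        if all(math.hypot(det[0] - k[0], det[1] - k[1]) > dist_thresh
               for k in kept):
            kept.append(det)
    return kept

# The collaborator sees one duplicate of the ego's car and one new object.
fused = late_fuse([(10.0, 0.0, 0.9)],
                  [(10.5, 0.2, 0.7), (30.0, 5.0, 0.8)])
print(fused)  # two boxes: the duplicate near (10, 0) is suppressed
```

Intermediate fusion, which shares learned features rather than boxes, typically outperforms this kind of box-level merging but requires a shared model architecture across agents.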
Conclusion:
- Summary of the significance of the V2X-Real dataset for future research in cooperative perception.
Statistics
The whole dataset contains 33K LiDAR frames and 171K camera images with over 1.2M annotated bounding boxes.
DAIR-V2X presents real-world datasets for Vehicle-to-Infrastructure (V2I) collaborations.
Existing datasets are limited to a single collaboration mode, involving at most two agents within the same spatial vicinity.