Core Concepts
A real-time traffic sign detection and voice narration system that uses a Convolutional Neural Network (CNN) model to detect traffic signs and narrate them to the driver, addressing situations where signs are missed, unfamiliar, or hard to interpret.
Summary
The paper presents a voice-assisted real-time traffic sign recognition system that uses a Convolutional Neural Network (CNN) model for detection and recognition, followed by a text-to-speech engine to narrate the detected signs to the driver.
The system operates as two subsystems:
- Detection and recognition of traffic signs using a trained CNN model, specifically the YOLO (You Only Look Once) architecture.
- Narration of the detected traffic sign to the driver using a text-to-speech engine.
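The two subsystems above can be sketched as a minimal pipeline. This is an illustrative stand-in, not the paper's implementation: the `detect_signs` stub represents the trained YOLO detector, and the `NARRATION` mapping and phrase strings are hypothetical examples of what would be handed to a text-to-speech engine.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Hypothetical detection result; the paper's YOLO model would
    produce a bounding box plus a class label and confidence."""
    label: str        # e.g. "stop", "speed_limit_50"
    confidence: float

# Illustrative label-to-phrase mapping (not from the paper).
NARRATION = {
    "stop": "Stop sign ahead.",
    "speed_limit_50": "Speed limit fifty kilometers per hour.",
}

def detect_signs(frame) -> list[Detection]:
    """Subsystem 1 stand-in: a real system would run the trained
    CNN (YOLO) on the camera frame here."""
    return [Detection("stop", 0.92)]  # placeholder output

def narrate(detections: list[Detection], threshold: float = 0.5) -> list[str]:
    """Subsystem 2: turn confident detections into spoken phrases.
    A real system would pass these strings to a text-to-speech engine."""
    return [NARRATION[d.label] for d in detections
            if d.confidence >= threshold and d.label in NARRATION]

# One frame through the pipeline:
phrases = narrate(detect_signs(frame=None))
```

The confidence threshold keeps low-quality detections from being narrated, which matters in a driver-assistance setting where false announcements are distracting.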
The key highlights of the system include:
- Robust, fast, and accurate traffic sign detection and recognition using deep learning techniques.
- Ability to assist drivers who may miss, not look at, or fail to comprehend traffic signs.
- Potential application in autonomous vehicle development.
The authors experimented with different YOLO network versions and configurations to balance detection speed against accuracy. The final model, based on YOLOv4-tiny, achieved a mean average precision (mAP) of 64.71% at 55 frames per second, enabling real-time performance.
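As a rough check of what these two numbers mean: mAP is the mean of the per-class average precisions, and 55 frames per second leaves about an 18 ms compute budget per frame. A small illustrative computation (the per-class AP values below are made up, not the paper's):

```python
# Hypothetical per-class average precisions (illustrative only).
ap_per_class = {"stop": 0.72, "yield": 0.61, "speed_limit": 0.58}

# mAP is simply the mean of the per-class APs.
mean_ap = sum(ap_per_class.values()) / len(ap_per_class)

# 55 frames per second corresponds to roughly 18 ms of compute per
# frame, the real-time budget the YOLOv4-tiny model must fit into.
frame_budget_ms = 1000 / 55

print(f"mAP = {mean_ap:.4f}, per-frame budget = {frame_budget_ms:.1f} ms")
```

This is why the authors favored the tiny variant: a larger YOLO model might raise mAP but would not fit the per-frame latency budget.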
The system was tested on the German Traffic Sign Detection Benchmark (GTSDB) dataset and the Mapillary Traffic Sign Dataset, demonstrating its effectiveness in detecting traffic signs under various environmental and lighting conditions.
Statistics
The system achieved a mean average precision of 64.71% at 55 frames per second.
Quotations
"The advantage of this system is that even if the driver misses a traffic sign, or does not look at the traffic sign, or is unable to comprehend the sign, the system detects it and narrates it to the driver."
"A system of this type is also important in the development of autonomous vehicles."