The research focuses on developing an automated system to address the persistent problem of littering in public places. Traditional approaches, which rely on manual intervention and witness reporting, suffer from delayed responses, inaccurate reports, and the anonymity of offenders.
The proposed system leverages surveillance cameras and advanced computer vision techniques to automate the process:
Litter Detection: The system employs the YOLOv4 object detection model to accurately identify various types of litter, such as bottles, bags, and umbrellas, in the surveillance footage.
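As a rough illustration of the detection stage, the sketch below shows the post-processing that typically follows a YOLOv4 forward pass: keeping boxes for litter-related classes above a confidence threshold, then applying non-maximum suppression (NMS). The class names, thresholds, and detection format are illustrative assumptions, not the paper's exact configuration.

```python
# Assumed label set for litter-related classes (illustrative only).
LITTER_CLASSES = {"bottle", "bag", "umbrella"}

def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def filter_litter(detections, conf_thresh=0.5, nms_thresh=0.45):
    """detections: list of (box, confidence, class_name) tuples.
    Keeps confident litter detections, suppressing overlapping duplicates."""
    kept = []
    # Sort candidate litter boxes by confidence, highest first.
    cands = sorted(
        (d for d in detections
         if d[2] in LITTER_CLASSES and d[1] >= conf_thresh),
        key=lambda d: d[1], reverse=True)
    for box, conf, cls in cands:
        # Keep a box only if it does not overlap an already-kept box.
        if all(iou(box, k[0]) < nms_thresh for k in kept):
            kept.append((box, conf, cls))
    return kept
```

In a real deployment the `detections` list would come from running the YOLOv4 network on each surveillance frame; only the filtering logic is shown here.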
Object Tracking: An improved version of the DeepSORT algorithm is used to reliably track the movement of detected objects and individuals, even in the presence of occlusion and viewpoint changes.
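The core of tracking-by-detection methods such as DeepSORT is associating each new frame's detections with existing tracks. The real algorithm combines a Kalman motion model, deep appearance embeddings, and Hungarian matching; the sketch below substitutes a much simpler greedy IoU match to illustrate the association step. All names and thresholds are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_thresh=0.3):
    """tracks: {track_id: last_box}; detections: list of boxes.
    Returns (matches {track_id: detection_index}, unmatched detection indices)."""
    # Score every track/detection pair, best overlaps first.
    pairs = sorted(
        ((iou(tbox, det), tid, di)
         for tid, tbox in tracks.items()
         for di, det in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = {}, set(), set()
    for score, tid, di in pairs:
        if score < iou_thresh:
            break  # remaining pairs overlap too little to match
        if tid in used_t or di in used_d:
            continue  # each track and detection matches at most once
        matches[tid] = di
        used_t.add(tid)
        used_d.add(di)
    unmatched = [di for di in range(len(detections)) if di not in used_d]
    return matches, unmatched
```

Unmatched detections would spawn new tracks, and tracks left unmatched for several frames would be deleted; appearance features are what let DeepSORT re-identify a person after occlusion, which plain IoU matching cannot do.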
Face Recognition: The system utilizes a multi-task cascaded convolutional network (MTCNN) for face detection and the ArcFace model for face recognition. This enables the identification of offenders by matching their faces to a database of identification cards.
The integrated system quickly identifies and responds to littering incidents, automating the penalization of litterbugs. This approach reduces the need for manual intervention, minimizes human error, and provides prompt identification of offenders, offering significant advantages in addressing the littering problem.
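A hedged sketch of how these stages could be wired together per frame is shown below. The proximity rule, function names, and event format are assumptions made for illustration; the actual system integrates the YOLOv4, DeepSORT, MTCNN, and ArcFace components described above.

```python
def process_frame(frame_id, litter_boxes, person_tracks, face_matcher):
    """Flag a littering event when a litter item appears near a tracked person.
    litter_boxes: list of (x1, y1, x2, y2) litter detections;
    person_tracks: {track_id: person_box};
    face_matcher: callable mapping a track_id to a person_id, or None."""
    events = []
    for box in litter_boxes:
        # Centre of the litter box.
        cx = (box[0] + box[2]) / 2
        cy = (box[1] + box[3]) / 2
        for tid, (px1, py1, px2, py2) in person_tracks.items():
            # Assumed proximity rule: litter centre inside a slightly
            # expanded person box (a 20-pixel margin, chosen arbitrarily).
            if px1 - 20 <= cx <= px2 + 20 and py1 - 20 <= cy <= py2 + 20:
                events.append({
                    "frame": frame_id,
                    "track": tid,
                    "offender": face_matcher(tid),  # None if unidentified
                })
    return events
```

Each emitted event could then feed the automated penalization step, with unidentified offenders (`offender` is `None`) escalated for manual review.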