The researchers developed a low-cost, open-source multisensor system for automated insect monitoring and classification. The key component is the imaging unit, which has been optimized to capture high-quality images of insects in motion. The system uses diffuse illumination, short flash durations, and a custom-designed camera setup to minimize motion blur and capture detailed morphological features needed for species-level identification.
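The trade-off the flash duration addresses can be made concrete with a back-of-the-envelope estimate: the blur a moving insect leaves on the sensor is its speed times the exposure time, converted to pixels. The numbers below are illustrative assumptions, not values from the paper.

```python
def motion_blur_px(speed_m_s: float, exposure_s: float,
                   fov_width_m: float, image_width_px: int) -> float:
    """Estimate motion blur in pixels: distance travelled during the
    exposure, scaled by the sensor's pixels-per-metre resolution."""
    pixels_per_m = image_width_px / fov_width_m
    return speed_m_s * exposure_s * pixels_per_m

# Assumed example values (not from the paper): an insect moving at 1 m/s,
# a 0.1 m field of view imaged at 4000 px width.
slow_flash = motion_blur_px(1.0, 1e-3, 0.1, 4000)  # 1 ms exposure -> 40 px blur
fast_flash = motion_blur_px(1.0, 1e-5, 0.1, 4000)  # 10 us flash -> 0.4 px blur
```

Under these assumed numbers, a millisecond-scale exposure smears the insect across tens of pixels, while a microsecond-scale flash keeps blur below a single pixel, which is why short flash durations preserve the fine morphological detail the classifiers rely on.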
The researchers evaluated the imaging system's performance on a dataset of 1,154 images across 16 insect species, spanning different orders, families, and genera. They tested three deep learning models (ResNet-50, MobileNet, and a custom CNN) on both full-frame and cropped insect images.
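Evaluating on a dataset with 16 species of varying abundance calls for a split that preserves per-class proportions, so that rare species appear in both train and test sets. The paper does not detail its split procedure; the following is a minimal stratified-split sketch using only the standard library, with toy labels standing in for the real species annotations.

```python
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.2, seed=0):
    """Split sample indices into train/test lists, preserving the
    per-class proportions (at least one test sample per class)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    train, test = [], []
    for lab, idxs in by_class.items():
        rng.shuffle(idxs)
        n_test = max(1, round(len(idxs) * test_frac))
        test.extend(idxs[:n_test])
        train.extend(idxs[n_test:])
    return sorted(train), sorted(test)

# Toy example: 3 hypothetical species with uneven counts.
labels = ["a"] * 10 + ["b"] * 5 + ["c"] * 5
train, test = stratified_split(labels, test_frac=0.2, seed=42)
```

Seeding the shuffle keeps the split reproducible across runs, which matters when comparing several models on the same held-out images.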
The results show that the ResNet-50 model, pre-trained on the iNaturalist dataset, achieved over 96% top-1 accuracy on the test set, even with the full-frame images. However, the smaller MobileNet and custom CNN models performed significantly better when trained on the cropped insect images, reaching up to 97.8% accuracy. This highlights the importance of capturing high-resolution, detailed insect features for robust species-level classification, especially for rare or visually similar species.
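The top-1 accuracy metric quoted above is simply the fraction of test images whose highest-scoring class matches the true label. A minimal dependency-free sketch (the logits and labels below are toy values, not the paper's data):

```python
def top1_accuracy(logit_rows, true_labels):
    """Fraction of samples whose highest-scoring class index
    equals the true label."""
    correct = sum(
        max(range(len(row)), key=row.__getitem__) == lab
        for row, lab in zip(logit_rows, true_labels)
    )
    return correct / len(true_labels)

# Toy scores for 3 samples over 2 classes; the third is misclassified.
acc = top1_accuracy([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]], [1, 0, 0])
```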
The researchers also developed a semantic segmentation model using U-Net to automatically detect and crop the insects in the images, further improving the classification performance. The complete system is designed to be low-cost, scalable, and adaptable to various trap types, making it suitable for large-scale insect biodiversity monitoring by citizen scientists.
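Once a segmentation model has produced a binary insect mask, the crop step reduces to taking the tight bounding box of the masked pixels. The helper below is an illustrative stdlib-only sketch of that step, not the authors' U-Net pipeline; in practice the mask would come from the segmentation network.

```python
def mask_bbox(mask):
    """Tight bounding box (top, left, bottom, right) of True pixels
    in a 2D boolean mask; None if the mask is empty."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    if not rows:
        return None
    return rows[0], cols[0], rows[-1] + 1, cols[-1] + 1

def crop(image, bbox, pad=0):
    """Crop a 2D image (list of rows) to bbox, optionally padded,
    clamped to the image bounds."""
    t, l, b, r = bbox
    t, l = max(0, t - pad), max(0, l - pad)
    b, r = min(len(image), b + pad), min(len(image[0]), r + pad)
    return [row[l:r] for row in image[t:b]]

# Toy 5x5 example: a 2x2 "insect" occupying rows 1-2, columns 2-3.
mask = [[False] * 5 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        mask[r][c] = True
image = [[r * 5 + c for c in range(5)] for r in range(5)]
patch = crop(image, mask_bbox(mask))
```

Cropping this way before classification fills the model's input with the insect itself rather than trap background, which is the mechanism behind the accuracy gains the cropped images gave the smaller models.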
Source: arxiv.org