The paper applies deep learning, specifically the Segment Anything Model (SAM), to automatically segment resinous areas in cross-section images of agarwood. Manual extraction is laborious and prone to human error, leading to potential waste in agarwood production. The proposed workflow captures a cross-section image, removes the background, segments the resinous regions with SAM, and converts the segmented image into a G-code script that a CNC machine can execute for automated extraction.
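The paper does not detail its mask-to-G-code conversion, but the final step can be sketched as a simple row-by-row raster: each run of consecutive "resinous" pixels becomes one plunge-cut-retract pass. All parameter names and values below (pixel scale, feed rate, cut depth) are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def mask_to_gcode(mask: np.ndarray, pixel_mm: float = 0.5,
                  feed: int = 300, cut_z: float = -1.0,
                  safe_z: float = 5.0) -> list:
    """Raster a binary segmentation mask into G-code, row by row.

    Each run of consecutive True pixels in a row becomes one
    plunge-cut-retract pass; pixel indices are scaled to millimetres.
    (Toy sketch only -- the paper's actual toolpath strategy is not given.)
    """
    lines = ["G21 ; units: mm", "G90 ; absolute positioning",
             "G0 Z%.2f" % safe_z]
    for y in range(mask.shape[0]):
        cols = np.flatnonzero(mask[y])
        if cols.size == 0:
            continue
        # split the row into runs of consecutive pixels
        breaks = np.where(np.diff(cols) > 1)[0] + 1
        for run in np.split(cols, breaks):
            x0, x1 = run[0] * pixel_mm, run[-1] * pixel_mm
            yy = y * pixel_mm
            lines += ["G0 X%.2f Y%.2f" % (x0, yy),   # rapid to run start
                      "G1 Z%.2f F%d" % (cut_z, feed),  # plunge
                      "G1 X%.2f F%d" % (x1, feed),     # cut along the run
                      "G0 Z%.2f" % safe_z]             # retract
    lines.append("M2 ; end of program")
    return lines

# Tiny 4x8 mask with two resinous runs
mask = np.zeros((4, 8), dtype=bool)
mask[1, 2:6] = True
mask[2, 1:4] = True
print("\n".join(mask_to_gcode(mask)))
```

A real pipeline would also need scaling calibration between the camera image and the CNC workspace, and a safer toolpath (e.g. contour-parallel passes) than this naive raster.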
The authors present a small dataset of 12 agarwood cross-section images and evaluate the performance of the SAM model using the Intersection over Union (IoU) metric. The results show that the model can achieve near-perfect segmentation (IoU > 97%) for images where the resinous and non-resinous regions have high contrast in color and hue. However, the model struggles with images where the two regions have similar visual characteristics. To improve the segmentation accuracy, the authors suggest automating the prompt creation process to cover the full spectrum of resin compound colors and hues.
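The IoU metric used in the evaluation is the ratio of the overlap between the predicted and ground-truth masks to their union. A minimal sketch with NumPy (the toy masks below are illustrative, not from the paper's dataset):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Toy example: two partially overlapping 4x4 squares on a 10x10 grid
pred = np.zeros((10, 10), dtype=bool)
truth = np.zeros((10, 10), dtype=bool)
pred[2:6, 2:6] = True   # 16 pixels
truth[4:8, 4:8] = True  # 16 pixels
print(round(iou(pred, truth), 4))  # intersection 4, union 28 -> 0.1429
```

An IoU above 0.97, as reported for the high-contrast images, means the predicted and ground-truth resinous regions are almost pixel-identical.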
The authors also discuss plans to train an image-based classifier to predict the region labels (healthy, resinous, and decayed core) and use a multi-voter scheme to refine the prompts for the SAM model, enabling a fully autonomous pipeline for agarwood resin extraction.
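The multi-voter prompt refinement is only outlined in the paper; one plausible reading is that several classifiers label each image patch, and patches confidently labelled "resinous" become positive point prompts for SAM. The sketch below assumes majority voting with ties resolved conservatively to "healthy"; the voting rule, label set handling, and all names are illustrative, not the authors' design.

```python
from collections import Counter

def majority_vote(votes: list) -> str:
    """Return the majority label; ties fall back to 'healthy'
    so that ambiguous patches are never cut (conservative choice)."""
    top = Counter(votes).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return "healthy"
    return top[0][0]

def prompt_points(patch_votes: dict) -> list:
    """Patch centers voted 'resinous' become positive point prompts for SAM."""
    return [xy for xy, votes in patch_votes.items()
            if majority_vote(votes) == "resinous"]

# Hypothetical votes from three classifiers at three patch centers
votes = {(10, 10): ["resinous", "resinous", "healthy"],
         (30, 10): ["healthy", "decayed", "healthy"],
         (50, 10): ["resinous", "decayed", "healthy"]}  # tie -> healthy
print(prompt_points(votes))  # [(10, 10)]
```

The selected coordinates would then be passed to SAM as point prompts, replacing the manual prompt creation the authors identify as the current bottleneck.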