The paper proposes a novel deep metric learning (DML) framework called FlameFinder to accurately detect RGB-obscured flames using thermal images from firefighter drones during wildfire monitoring. The key insights are:
Existing technologies struggle to detect flames obscured by smoke: thermal cameras lack absolute temperature reference points, so many non-flame hot spots are flagged as false positives.
To address this, the proposed model utilizes paired thermal-RGB images captured onboard drones for training. It learns latent flame features from smoke-free samples and identifies flames in smoky thermal patches based on their equivalent thermal-domain distribution.
The DML framework combines three loss functions (triplet loss, cosine loss, and center loss) to learn an optimal embedding function in the latent space.
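The summary does not give the exact formulations used in the paper; below is a minimal sketch of the three standard DML losses on plain Python lists, assuming Euclidean embeddings and the textbook definitions (margin value and half-squared-distance center loss are illustrative assumptions, not taken from the paper):

```python
import math

def l2(u, v):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Pull the anchor toward the positive and push it past the
    # negative by at least `margin` (margin chosen for illustration).
    return max(0.0, l2(anchor, positive) - l2(anchor, negative) + margin)

def cosine_loss(u, v):
    # 1 - cosine similarity: small when embeddings point the same way.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def center_loss(x, center):
    # Half squared distance to the class center; compacts each class cluster.
    return 0.5 * sum((a - c) ** 2 for a, c in zip(x, center))

# Toy 2-D embeddings: flame anchor, another flame patch, a non-flame patch.
anchor, positive, negative = [1.0, 0.0], [0.9, 0.1], [0.0, 1.0]
print(triplet_loss(anchor, positive, negative))
print(cosine_loss(anchor, positive))
print(center_loss(anchor, [0.95, 0.05]))
```

Intuitively, the triplet and cosine terms shape relative geometry between classes, while the center term pulls samples toward their class centroid.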
To prevent the center loss from dominating training, an attention mechanism balances the contributions of the three DML loss gradients, improving class discrimination in the latent feature space.
Evaluation on FLAME2 and FLAME3 datasets shows the method's effectiveness in diverse fire and no-fire scenarios, outperforming baseline models by 4.4% and 7% in unobscured flame detection accuracy respectively, while also demonstrating enhanced class separation in obscured scenarios.
Key Insights Distilled From
by Hossein Rajo... at arxiv.org, 04-11-2024
https://arxiv.org/pdf/2404.06653.pdf