An empirical study compared four annotation methods for in-the-wild data, showing that in situ methods produce more precise labels than recall-based methods. Combining an activity diary with a visualization tool improved annotation consistency and raised a deep learning model's F1-score by up to 8%. Sensor-based activity recognition requires multimodal inputs to capture complex activities, and long-term data recording remains challenging because of contextual differences between controlled and uncontrolled environments. Annotation biases such as self-recall bias can degrade classifier performance; a deep learning analysis demonstrated the impact of annotation quality on classification results.
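As a point of reference for the reported metric, the sketch below shows how a macro-averaged F1-score is typically computed for a multi-class activity classifier. The activity labels and predictions are hypothetical illustration data, not taken from the study.

```python
# Minimal sketch of macro F1 for activity recognition.
# Labels and predictions below are hypothetical, not the paper's data.

def f1_per_class(y_true, y_pred, label):
    """F1 for one activity class, from true/false positives and false negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores over all classes present in y_true."""
    labels = sorted(set(y_true))
    return sum(f1_per_class(y_true, y_pred, label) for label in labels) / len(labels)

# Hypothetical ground-truth annotations vs. classifier output for three activities.
y_true = ["walk", "walk", "sit", "sit", "cook", "cook"]
y_pred = ["walk", "sit",  "sit", "sit", "cook", "walk"]
print(round(macro_f1(y_true, y_pred), 3))  # → 0.656
```

An "up to 8%" improvement in this metric means the annotation scheme shifted labels enough to change per-class precision and recall, which macro F1 then aggregates equally across activity classes.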
Key insights distilled from a paper by Alexander Ho..., arxiv.org, 03-18-2024
https://arxiv.org/pdf/2305.08752.pdf