Core Concepts
Improving landmark detection accuracy through normalized validity scores.
Abstract
The paper discusses the importance of landmark detection in applications such as head pose estimation, emotion estimation, and face recognition. It introduces a novel approach that improves accuracy by having the network estimate a normalized inaccuracy score for its detected landmarks. The proposed method adds a margin to ignore negligible errors close to the ground truth. Evaluation results show significant improvements in accuracy and outlier detection with the new formulation.
Stats
$Loss_{Inaccuracy} = \left(\left(\sum_{i=0}^{GroupSize} |GT_i - ES_i|\right) - ES_{Inaccuracy}\right)^2$ (Equation 1)
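Equation 1 can be sketched in NumPy as below. The variable names (`gt`, `es`, `es_inaccuracy`) are illustrative, assuming `gt` and `es` are arrays of ground-truth and estimated landmark coordinates and `es_inaccuracy` is the network's scalar self-estimate of its own error:

```python
import numpy as np

def inaccuracy_loss(gt, es, es_inaccuracy):
    """Squared gap between the true summed landmark error and the
    network's own inaccuracy estimate (sketch of Equation 1)."""
    # Summed absolute error over all landmarks in the group
    true_error = np.abs(gt - es).sum()
    # Penalize the difference between the real error and the self-estimate
    return (true_error - es_inaccuracy) ** 2
```

When the self-estimate matches the actual summed error, the loss is zero, which is what lets the network learn to report its own failure.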
Margin parameter set to 0.005 after evaluation on the training set (Table 1)
Results for different margins and models, averaged over 5 runs, are provided (Table 1)
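One plausible reading of the margin extension, sketched under the assumption that per-landmark errors smaller than the margin are treated as zero before the loss is formed (the margin value 0.005 comes from Table 1; the zeroing rule itself is an assumption of these notes):

```python
import numpy as np

MARGIN = 0.005  # selected on the training set (Table 1)

def inaccuracy_loss_with_margin(gt, es, es_inaccuracy, margin=MARGIN):
    """Equation 1 with negligible per-landmark errors zeroed out, so
    tiny deviations near ground truth do not count as inaccuracy.
    (Assumed reading of the margin approach.)"""
    per_landmark = np.abs(gt - es)
    per_landmark[per_landmark < margin] = 0.0
    return (per_landmark.sum() - es_inaccuracy) ** 2
```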
Training data drawn from the TEyEDS dataset, with a subset selected due to hardware limitations (Section 4.1)
Training uses a batch size of 10, an initial learning rate of 10^-4, the Adam optimizer, and data augmentation techniques (Section 4.2)
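The optimizer setup can be sketched with a minimal NumPy Adam step; only the hyperparameters listed above (batch size 10, learning rate 10^-4, Adam) come from Section 4.2, while the placeholder linear model and random data are purely illustrative:

```python
import numpy as np

def adam_step(param, grad, state, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update (the optimizer reported in Section 4.2)."""
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])   # bias-corrected first moment
    v_hat = state["v"] / (1 - b2 ** state["t"])   # bias-corrected second moment
    return param - lr * m_hat / (np.sqrt(v_hat) + eps)

# Minimal training loop over batches of 10 samples (batch size from the paper);
# the linear least-squares objective is a stand-in for the real network.
rng = np.random.default_rng(0)
w = np.zeros(2)
state = {"t": 0, "m": np.zeros(2), "v": np.zeros(2)}
X, y = rng.normal(size=(100, 2)), rng.normal(size=100)
for i in range(0, 100, 10):
    xb, yb = X[i:i + 10], y[i:i + 10]
    grad = 2 * xb.T @ (xb @ w - yb) / len(xb)  # gradient of mean squared error
    w = adam_step(w, grad, state)
```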
Results show performance improvements with the proposed extension compared to the original formulation (Table 2)
Outlier detection results demonstrate improved performance across all models with meaningful inaccuracy signals (Table 3)
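A hedged sketch of how the predicted inaccuracy could drive outlier detection: detections whose self-estimated inaccuracy exceeds a threshold are rejected. The function name, scores, and threshold below are illustrative, not values from the paper:

```python
import numpy as np

def flag_outliers(predicted_inaccuracy, threshold):
    """Mark detections whose self-estimated inaccuracy exceeds a
    threshold as outliers (illustrative decision rule)."""
    return np.asarray(predicted_inaccuracy) > threshold

scores = [0.01, 0.40, 0.03, 0.95]          # hypothetical per-image estimates
mask = flag_outliers(scores, threshold=0.30)
```

Because the margin-trained loss keeps the inaccuracy signal meaningful (Table 3), a simple threshold like this can separate usable detections from failures.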
Quotes
"We propose an improvement to the landmark validity loss."
"One part of this process is the accurate and fine-grained detection of shape."
"The neural network estimates its own failure along with landmark inaccuracies."
"Our contributions include an extended equation for joint landmark inaccuracy loss."
"Results show significant improvement in model performance with the proposed extension."