Core Concepts
MeciFace is a glasses-based wearable system that fuses mechanomyography (MMG) and inertial sensing for real-time, on-the-edge recognition of facial expressions and eating/drinking activities. Its target application is monitoring stress-related eating behavior and supporting healthier lifestyles.
Abstract
The MeciFace system is a wearable solution for real-time recognition of facial expressions and eating/drinking activities. It employs a hierarchical multimodal fusion approach, combining mechanomyography (MMG) and inertial sensors on a glasses-based platform.
The key highlights of the MeciFace system are:
Real-time, on-the-edge processing: The system performs data acquisition, signal processing, and inference entirely on the embedded hardware, minimizing reliance on external devices and ensuring privacy and low power consumption.
Lightweight neural network models: The system utilizes compact convolutional neural network models with a tiny memory footprint (11-19KB) to enable efficient deployment on microcontrollers.
Hierarchical multimodal fusion: The first stage uses an MMG-based model to detect motion and distinguish between null (e.g., walking, talking) and active (facial expressions or eating/drinking) classes. The second stage then employs an inertial-based model to classify the specific facial expressions or eating/drinking activities.
Robust performance: The MeciFace system achieves an F1-score of ≥86% for facial expression recognition and 94% for eating/drinking monitoring in real-time, on-the-edge evaluation with unseen users.
Potential for contextual monitoring: The system's modular design allows for the integration of additional sensors, such as a barometer, gas sensor, and microphone, to capture environmental and audio data, enabling a more comprehensive monitoring of stress-triggered eating episodes.
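The two-stage cascade described above can be sketched in a few lines. The models here are stand-in stubs (a simple energy threshold and a variance heuristic), not the paper's trained CNNs; the class names, window shapes, and threshold value are all assumptions made purely for illustration. What the sketch does capture is the control flow: the cheaper MMG stage gates the inertial stage, so the second model only runs when motion is detected.

```python
import numpy as np

def mmg_stage(mmg_window: np.ndarray) -> str:
    """Stage 1: gate on MMG signal energy to separate null from active motion.

    Stub: a real deployment would run the compact MMG CNN here.
    """
    energy = float(np.mean(mmg_window ** 2))
    return "active" if energy > 0.5 else "null"  # threshold is illustrative

def inertial_stage(imu_window: np.ndarray, mode: str) -> str:
    """Stage 2: classify the specific activity from the inertial window.

    Stub: picks the axis with the largest variance as a fake class score.
    """
    scores = imu_window.var(axis=0)
    classes = (["expression_a", "expression_b", "neutral"] if mode == "face"
               else ["eating", "drinking"])
    return classes[int(np.argmax(scores)) % len(classes)]

def classify(mmg_window: np.ndarray, imu_window: np.ndarray,
             mode: str = "face") -> str:
    # Stage 2 only runs when stage 1 reports active motion, which is
    # what keeps average compute (and therefore power draw) low.
    if mmg_stage(mmg_window) == "null":
        return "null"
    return inertial_stage(imu_window, mode)
```

With this structure, a quiet MMG window short-circuits to "null" (e.g. during walking or talking) without ever invoking the inertial model.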
The MeciFace prototype demonstrates the feasibility of a ubiquitous, energy-efficient, and privacy-conscious wearable system for monitoring facial expressions and eating activities, with potential applications in stress management, dietary monitoring, and overall health promotion.
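A back-of-the-envelope check shows how the reported 11-19KB footprint constrains model size. The layer dimensions below are hypothetical (the paper's actual architectures are not reproduced here); the point is only that a small 1D CNN of a few thousand float32 parameters lands in that budget.

```python
# Parameter counts for standard 1D conv and dense layers.
def conv1d_params(in_ch: int, out_ch: int, kernel: int) -> int:
    return in_ch * out_ch * kernel + out_ch  # weights + biases

def dense_params(in_features: int, out_features: int) -> int:
    return in_features * out_features + out_features

# Hypothetical stack: two conv blocks over a 3-axis inertial stream,
# then a small classification head (layer sizes are illustrative).
params = (
    conv1d_params(3, 12, 5)     # 3 input axes -> 12 filters, kernel 5
    + conv1d_params(12, 24, 5)  # second conv block
    + dense_params(24, 64)      # pooled features -> hidden layer
    + dense_params(64, 6)       # hidden -> 6 output classes
)
size_kb = params * 4 / 1024     # float32 = 4 bytes per parameter
print(f"{params} parameters ≈ {size_kb:.1f} KB")
```

This hypothetical stack comes to roughly 14 KB, i.e. within the 11-19KB range, which is small enough to fit in microcontroller flash alongside the runtime.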
Stats
The system achieves an F1-score of ≥86% for facial expression recognition and 94% for eating/drinking monitoring in real-time, on-the-edge evaluation with unseen users.
Quotes
"MeciFace aims to provide a low-power, privacy-conscious, and highly accurate tool for promoting healthy eating behaviors and stress management."
"The hierarchical multimodal fusion is extended for the case of eating/drinking monitoring. The first stage discriminates between null and eating/drinking categories with an MMG model. The second stage employs an inertial model to classify between eating and drinking."