REWIND Dataset: Privacy-preserving Speaking Status Segmentation from Multimodal Body Movement Signals in the Wild

Core Concepts
Recognizing speaking in humans using machine learning models trained on video and wearable sensor data.
The paper introduces the REWIND dataset for speaking status segmentation and discusses the challenges of obtaining individual voice recordings in mingling scenarios. It provides baselines for no-audio speaking status segmentation from video, body acceleration, and body pose tracks, highlights the importance of high-quality audio recordings for cross-modality studies, and outlines implications for social signal processing and computational social science.
"High-quality speaking status signals have been obtained from personal head-mounted and directional microphones in seated meetings." "Acceleration readings were obtained from wearable devices in a badge-like form factor worn by data subjects on the chest." "Video recordings include top-down and side-elevated views."
"Recognizing speaking in humans is a central task towards understanding social interactions." "Machine learning models trained on video and wearable sensor data make it possible to recognize speech by detecting its related gestures in an unobtrusive, privacy-preserving way."
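The no-audio baselines mentioned above detect speaking from its related body movement. As a rough illustration of that idea (not the paper's actual method), the sketch below windows a tri-axial acceleration stream like the one recorded by the chest-worn badges and labels each window as speaking or not by thresholding its motion energy; the window size, hop, and threshold values are arbitrary assumptions for the demo.

```python
import numpy as np

def window_features(accel, win=20, hop=10):
    """Split tri-axial acceleration (T, 3) into overlapping windows
    and compute per-window motion energy (variance of the magnitude)."""
    mag = np.linalg.norm(accel, axis=1)
    feats = []
    for start in range(0, len(mag) - win + 1, hop):
        feats.append(mag[start:start + win].var())
    return np.array(feats)

def segment_speaking(accel, threshold=0.1, win=20, hop=10):
    """Toy per-window speaking status: 1 where motion energy
    exceeds the threshold, 0 otherwise."""
    return (window_features(accel, win, hop) > threshold).astype(int)

# Synthetic demo: a still person followed by a gesturing one.
rng = np.random.default_rng(0)
still = rng.normal(0, 0.05, size=(100, 3))     # low-variance readings
gesturing = rng.normal(0, 1.0, size=(100, 3))  # high-variance movement
signal = np.vstack([still, gesturing])
labels = segment_speaking(signal)              # 0s for still, 1s for gesturing
```

A real baseline would replace the hand-set threshold with a classifier trained on annotated windows, but the pipeline shape (windowing, per-window features, per-window labels) is the same.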

Key Insights Distilled From

by Jose Vargas ... at 03-05-2024
REWIND Dataset

Deeper Inquiries

Question 1

How could the REWIND dataset be used to study social signals other than speaking status?

Question 2

What are the limitations of a detailed analysis of speaking status detection in crowded environments?

Question 3

What effect do the high-quality audio recordings in the REWIND dataset have on annotation reliability and model performance?