In human-robot teams, co-movement (proxemics and kinesics) is associated with trust development and can serve as an early-warning signal of trust damage.
Personalization in human-robot interaction can be achieved efficiently by learning a latent action space that maximizes the mutual information between the pre-trained robot policy and the user's preference-aligned domain, without significantly compromising the original task performance.
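As a rough illustration of the underlying trade-off (not the paper's method): rather than estimating mutual information, the sketch below fits a linear latent-to-action decoder `W` that balances matching a preference-aligned action against staying close to the pre-trained policy's action. All names, the quadratic objective, and the toy data are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=2)        # low-dimensional user latent command
a_base = rng.normal(size=4)   # action proposed by the pre-trained policy
a_pref = a_base + 0.5         # hypothetical preference-aligned target action

W = rng.normal(size=(4, 2))   # latent -> action decoder (learned interface)
lam, lr = 1.0, 0.1            # trade-off weight, gradient step size

for _ in range(2000):
    a = W @ z
    # gradient of ||Wz - a_pref||^2 + lam * ||Wz - a_base||^2 w.r.t. W
    grad = 2 * ((a - a_pref) + lam * (a - a_base))[:, None] * z[None, :]
    W -= lr * grad

a = W @ z
# With lam = 1, the optimum places a halfway between a_base and a_pref:
# preference alignment without fully abandoning the original policy.
```

Raising `lam` pulls the decoded action back toward the pre-trained policy; lowering it favors the user's preference.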
The more empathic a robot's behavior, the more users tend to attribute mental states to it.
People imagine hidden "phantom costs" when robots make overly generous offers, making them less likely to accept those offers.
A novel deep learning framework, URGR, enables robust recognition of human gestures from distances up to 25 meters using only a simple RGB camera. The framework combines a super-resolution model, HQ-Net, and a hybrid classifier, GViT, to overcome the challenges of low-resolution and blurry images at long distances.
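The two-stage structure of such a pipeline (enhance the small, distant crop, then classify it) can be sketched as follows. This is a minimal stand-in, not the URGR implementation: `super_resolve` uses naive upsampling in place of an HQ-Net-like model, and `classify_gesture` is a dummy in place of a GViT-like classifier, to show only the data flow.

```python
import numpy as np

def super_resolve(patch: np.ndarray, scale: int = 4) -> np.ndarray:
    """Stand-in for a learned super-resolution model:
    simple nearest-neighbour upsampling."""
    return patch.repeat(scale, axis=0).repeat(scale, axis=1)

def classify_gesture(patch: np.ndarray, n_classes: int = 6) -> int:
    """Stand-in for a learned classifier: maps mean intensity
    to a dummy class label, just to exercise the pipeline."""
    return int(patch.mean() * n_classes) % n_classes

def recognize(frame: np.ndarray, bbox) -> int:
    """Crop the distant person, enhance the low-res crop, then classify."""
    y0, y1, x0, x1 = bbox
    crop = frame[y0:y1, x0:x1]        # small, blurry region of interest
    enhanced = super_resolve(crop)    # low-res crop -> enhanced patch
    return classify_gesture(enhanced) # enhanced patch -> gesture label

frame = np.random.default_rng(0).random((120, 160))
label = recognize(frame, (40, 56, 60, 76))
```

The key design point the paper's framework reflects: classification quality at long range depends on enhancing the crop before the classifier ever sees it.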
Two methods, surround dense sampling and Online Temporally Aware Label Cleaning (O-TALC), improve online temporal action segmentation by addressing inaccurate segment boundaries and oversegmentation.
The auditory detectability of a wheeled robot and a quadruped robot varies significantly, with the quadruped robot being detected at much larger distances, even in high background noise. This has important implications for the design of human-centered robot navigation algorithms.
This article proposes an adapted Temporal Graph Networks (TGN) model that comprehensively represents social interaction dynamics by incorporating temporal multi-modal behavioral data, including gaze interaction, voice activity, and environmental context. The representation is practical to implement and outperforms baseline models on next-gaze and next-speaker prediction, tasks crucial for effective human-robot collaboration.
Our framework ECHO learns a shared representation space between humans and robots to generate socially compliant robot behaviors by forecasting human motions in interactive social scenarios.
A novel tele-immersive framework that promotes cognitive and physical collaboration between humans and drones through Mixed Reality, incorporating bi-directional spatial awareness and multi-modal virtual-physical interaction approaches.