Core Concepts
This article presents an architecture that employs computer vision, machine learning, and artificial-intelligence algorithms to enable a mobile robot to identify users and guide them in a social navigation context, providing an intuitive and user-friendly experience.
Abstract
The article presents an architecture for user identification and social navigation with a mobile robot. The key highlights and insights are:
The architecture consists of three nodes: the manager node, the realsense_sub node, and the cmd node.
The manager node orchestrates the execution order of the pipeline stages: gesture recognition, facial recognition, and distance monitoring.
The realsense_sub node performs skeleton recognition, using the face ID produced by facial recognition to verify that the identified user is actually following the robot.
The cmd node commands the robot's velocity while monitoring the distance between the robot and the user, stopping the robot if the distance exceeds a desired threshold.
The experimental validation demonstrates that the system can guide a user to a specific destination while continuously tracking and recording the real-time robot-user distance; an Exponential Moving Average (EMA) filter smooths the depth data acquired by the RealSense camera.
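The EMA smoothing applied to the RealSense depth readings can be sketched as below; this is a minimal illustration, and the smoothing factor `alpha` is an assumption, as the article does not report the value used.

```python
# Minimal sketch of Exponential Moving Average (EMA) smoothing for noisy
# depth readings, as used to improve the RealSense distance data.
# The smoothing factor `alpha` (0 < alpha <= 1) is an assumed value.
def ema_filter(readings_mm, alpha=0.2):
    """Smooth a sequence of distance readings (in mm) with an EMA."""
    smoothed = []
    ema = None
    for r in readings_mm:
        # Seed the EMA with the first reading, then blend each new
        # reading with the running average.
        ema = r if ema is None else alpha * r + (1 - alpha) * ema
        smoothed.append(ema)
    return smoothed
```

A higher `alpha` tracks sudden distance changes more quickly, while a lower `alpha` suppresses more of the sensor noise.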
The authors note that future development will integrate algorithms for autonomous robot movement, collision avoidance, and environment mapping.
Stats
The desired distance between the robot and the user is set at 2000 mm (2 meters).
The robot stops when the detected distance from the user exceeds the desired distance.
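The stop condition described above can be sketched as a simple threshold check; the function and parameter names here are illustrative, not taken from the paper, and the cruise velocity is an assumed value.

```python
# Hypothetical sketch of the cmd node's stop condition: halt the robot
# whenever the measured robot-user distance exceeds the desired 2000 mm
# (2 m) threshold. Names and the cruise velocity are illustrative.
DESIRED_DISTANCE_MM = 2000

def compute_velocity(distance_mm, cruise_velocity=0.3):
    """Return the forward velocity (m/s): cruise while the user is
    within the desired distance, stop (0.0) once they fall behind."""
    if distance_mm > DESIRED_DISTANCE_MM:
        return 0.0  # user too far behind: stop and wait
    return cruise_velocity
```

In the actual architecture this check would run in the cmd node's control loop, publishing the resulting velocity command to the robot at each cycle.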