This work presents Adam, a novel humanoid robot, together with an imitation learning framework that enables human-like locomotion. By leveraging human motion data, the framework sidesteps complex hand-crafted reward design and simplifies the training strategy.
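A minimal sketch of the kind of motion-imitation reward such a framework might use; the exponential tracking form, the function name, and the scale `sigma` are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def imitation_reward(joint_pos, ref_joint_pos, sigma=2.0):
    """Illustrative motion-imitation reward: track reference joint angles taken
    from retargeted human motion data instead of hand-designed gait rewards.

    joint_pos, ref_joint_pos: (n_joints,) current and reference joint angles.
    """
    error = np.sum((joint_pos - ref_joint_pos) ** 2)
    # Exponentiated tracking error keeps the reward bounded in (0, 1].
    return np.exp(-sigma * error)
```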
Learning natural locomotion and transitions for humanoid robots.
Humanoid-Gym is an open-source reinforcement learning framework designed to train locomotion skills for humanoid robots, enabling zero-shot transfer from simulation to real-world environments.
This research proposes a novel three-layered architecture for achieving stylistic, human-like walking in humanoid robots, combining a Deep Neural Network (DNN) for trajectory generation with a Model Predictive Controller (MPC) for online step adjustment and dynamic feasibility.
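An illustrative skeleton of how the three layers could fit together in a control loop; all component names (`dnn_trajectory_generator`, `mpc`, `whole_body_controller`, `robot`) are hypothetical placeholders rather than the paper's interfaces:

```python
def control_loop(dnn_trajectory_generator, mpc, whole_body_controller, robot, n_steps=1000):
    """Illustrative three-layer loop: learned trajectory generation, MPC-based
    step adjustment for feasibility, and low-level whole-body tracking."""
    for _ in range(n_steps):
        state = robot.get_state()
        # Layer 1: DNN proposes a stylistic, human-like reference trajectory.
        reference = dnn_trajectory_generator(state)
        # Layer 2: MPC adjusts steps online so the reference stays dynamically feasible.
        feasible_plan = mpc.solve(state, reference)
        # Layer 3: whole-body controller converts the plan into joint commands.
        torques = whole_body_controller.compute(state, feasible_plan)
        robot.apply(torques)
```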
This research demonstrates that by incorporating simple distance-based reward terms into a reinforcement learning framework, humanoid robots can be trained to navigate obstacle-ridden environments while performing bipedal locomotion.
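A minimal sketch of what such distance-based reward terms might look like; the weights, clearance radius, and function name are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def distance_rewards(root_pos, goal_pos, obstacle_pos, w_goal=1.0, w_obstacle=0.5, clearance=0.5):
    """Illustrative distance-based reward terms for obstacle-aware locomotion.

    root_pos, goal_pos: (2,) planar positions of the robot base and the goal.
    obstacle_pos: (N, 2) planar positions of obstacles.
    """
    # Reward progress toward the goal: larger when the robot is closer.
    goal_dist = np.linalg.norm(goal_pos - root_pos)
    r_goal = w_goal * np.exp(-goal_dist)

    # Penalize proximity to the nearest obstacle once inside a clearance radius.
    obstacle_dist = np.min(np.linalg.norm(obstacle_pos - root_pos, axis=1))
    r_obstacle = -w_obstacle * max(0.0, clearance - obstacle_dist)

    return r_goal + r_obstacle
```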
Lipschitz-Constrained Policies (LCP) enforce smooth action outputs through a differentiable gradient penalty, offering a simple and effective alternative to traditional smoothing techniques for training robust humanoid locomotion controllers and enabling successful sim-to-real transfer.
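A minimal sketch of such a gradient penalty, assuming a PyTorch policy network; the penalty weight and exact penalty form here are illustrative rather than the paper's precise loss:

```python
import torch

def lipschitz_gradient_penalty(policy, obs, penalty_weight=0.002):
    """Illustrative gradient penalty encouraging smooth (low-Lipschitz) actions.

    Penalizes the squared norm of the gradient of the summed action components
    with respect to the observations, discouraging abrupt output changes for
    small input changes.
    """
    obs = obs.clone().requires_grad_(True)
    actions = policy(obs)
    # create_graph=True lets the penalty itself be backpropagated during training.
    grads = torch.autograd.grad(actions.sum(), obs, create_graph=True)[0]
    return penalty_weight * grads.pow(2).sum(dim=-1).mean()
```

The penalty would typically be added to the standard policy-gradient loss each update, so the optimizer trades off task reward against action smoothness.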