The paper presents a new method called DualLQR for efficiently grasping oscillating apples using task-parameterized learning from demonstration (LfD). The key challenges in this task are: 1) closely tracking the oscillating target during the final approach for damage-free grasping, and 2) minimizing the overall path length for improved efficiency.
The proposed DualLQR method uses a dual LQR setup, with one LQR running in the reference frame of the initial end-effector pose and another in the reference frame of the oscillating apple. Gaussian Mixture Regression (GMR) is used to extract the mean and covariance at each timestep in each reference frame, which are then used to fit the LQRs. During execution, the control outputs from the two LQRs are combined using a weighted average based on the precision matrices from the GMR.
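The fusion step can be illustrated with a minimal sketch of precision-weighted averaging of the two control commands. The function and variable names below are illustrative assumptions, not taken from the paper's implementation: each LQR is assumed to output a control command in its own reference frame, and the GMR covariance for the current timestep is inverted to obtain the precision weight.

```python
import numpy as np

def fuse_controls(u_init, sigma_init, u_target, sigma_target):
    """Combine two LQR control outputs via precision-weighted averaging.

    u_init / u_target: control commands from the LQRs in the initial
    end-effector frame and the oscillating-target frame.
    sigma_init / sigma_target: GMR covariances at the current timestep;
    their inverses act as precision weights.
    """
    p_init = np.linalg.inv(sigma_init)      # precision of the initial-pose frame
    p_target = np.linalg.inv(sigma_target)  # precision of the target frame
    # Frames with lower GMR variance (higher precision) dominate the result.
    p_sum = p_init + p_target
    return np.linalg.solve(p_sum, p_init @ u_init + p_target @ u_target)

# Hypothetical 3-DoF example: near the target, sigma_target shrinks,
# so the target-frame LQR takes over the combined command.
u1 = np.array([0.02, 0.00, 0.01])
u2 = np.array([0.00, 0.03, 0.00])
S1 = np.diag([1e-2, 1e-2, 1e-2])
S2 = np.diag([1e-3, 1e-3, 1e-3])
print(fuse_controls(u1, S1, u2, S2))
```

The design intuition is that the demonstrations encode, through the GMR covariances, when each reference frame matters most: early in the motion the initial-pose frame dominates, while near the apple the oscillating-target frame dominates, yielding tight tracking exactly when it is needed.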
Extensive simulation experiments were conducted to compare DualLQR against the state-of-the-art InfLQR method. The results show that DualLQR significantly outperforms InfLQR in final approach accuracy, with a 60% improvement for high orientation oscillations. DualLQR also reduces the overall distance travelled by the manipulator.
Further real-world testing on a simplified apple grasping task demonstrated that DualLQR can successfully grasp oscillating apples with a 99% success rate. The optimal control cost setting was found to balance the trade-off between final approach accuracy and distance travelled, resulting in the fastest grasping time.