Core Concepts
The paper presents a hierarchical, decentralized approach that uses reinforcement learning to control maneuverable tether-net systems for effective capture of space debris.
Abstract
The paper discusses the use of robotic tether-net systems to actively remove large space debris. It introduces a decentralized implementation of trajectory planning and control based on reinforcement learning (RL), and demonstrates that this approach captures debris at lower fuel costs than traditional methods. By employing maneuverable units (MUs) guided by PID controllers informed by noisy sensor feedback, the system achieves successful capture while reducing fuel consumption.
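The decentralized PID loop described above can be sketched as follows. Each MU runs its own controller, turning a noisy measurement of its relative position into a thrust command. The gains, timestep, unit-mass point dynamics, and noise level below are illustrative assumptions, not values from the paper:

```python
import numpy as np

class PID3D:
    """Per-axis PID sketch: each MU runs its own copy (decentralized),
    converting noisy relative-position error into a thrust vector.
    Gains, timestep, and limits are illustrative, not from the paper."""
    def __init__(self, kp=2.0, ki=0.1, kd=0.8, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = np.zeros(3)
        self.prev_err = np.zeros(3)

    def thrust(self, target, measured):
        err = np.asarray(target, dtype=float) - np.asarray(measured, dtype=float)
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy closed loop: unit-mass MU, no orbital dynamics, noisy position sensing.
rng = np.random.default_rng(1)
pid = PID3D()
pos, vel = np.zeros(3), np.zeros(3)
target = np.array([5.0, -3.0, 2.0])   # assigned trajectory waypoint
for _ in range(200):
    noisy = pos + rng.normal(scale=0.05, size=3)  # noisy sensor feedback
    u = pid.thrust(target, noisy)                 # thrust vector output
    vel += u * pid.dt
    pos += vel * pid.dt
```

Despite the measurement noise, the damped loop settles near the waypoint, which is the essential behavior the paper relies on for trajectory following.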
The study focuses on two tether-net systems, one with 4 MUs and another with 8 MUs, showcasing the benefits of maneuverable nets in enhancing flexibility and reliability during debris capture. Reinforcement learning is used to train policies that determine optimal aiming points for the MUs based on the relative location of the target debris. Simulation-based experiments validate the success of this approach in capturing debris at lower fuel costs than conventional methods.
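To make the planner's output concrete, the sketch below stands in for the learned RL policy with a hand-coded geometric rule: given the debris's relative position, it spreads one aiming point per MU around the target so the net mouth opens toward it. The circular layout and the `spread` parameter are illustrative assumptions, not the paper's policy:

```python
import numpy as np

def aiming_points(debris_rel_pos, n_mus=4, spread=2.0):
    """Hypothetical stand-in for the RL policy: map the debris's relative
    position to per-MU aiming points arranged on a circle around it
    (simplified to the x-y plane), so the net envelops the target."""
    debris_rel_pos = np.asarray(debris_rel_pos, dtype=float)
    angles = 2 * np.pi * np.arange(n_mus) / n_mus
    offsets = spread * np.stack(
        [np.cos(angles), np.sin(angles), np.zeros(n_mus)], axis=1)
    return debris_rel_pos + offsets  # shape (n_mus, 3)

pts = aiming_points([10.0, 0.0, 5.0], n_mus=4)
print(pts.shape)  # (4, 3)
```

In the paper, an RL policy learns this mapping instead, trading off capture quality against the fuel each MU spends reaching its aiming point; each aiming point would then be handed to that MU's PID controller as a trajectory target.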
The paper also introduces a surrogate model based on a recurrent neural network (RNN) to predict capture-quality metrics, speeding up the RL training process. Results show that the RL-guided systems achieve a 100% capture success rate over unseen test scenarios while significantly reducing total fuel consumption compared to nominal baselines.
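A minimal sketch of what such a surrogate might look like, using a vanilla recurrent cell with random, untrained weights: it consumes a simulated trajectory sequence and emits a scalar capture-quality score. All dimensions, the cell type, and the sigmoid output head are assumptions for illustration, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyRNNSurrogate:
    """Vanilla-RNN sketch (hypothetical sizes, untrained weights) of a
    surrogate mapping a trajectory sequence to a capture-quality score,
    standing in for the paper's trained RNN surrogate model."""
    def __init__(self, in_dim=6, hid_dim=16):
        s = 1.0 / np.sqrt(hid_dim)
        self.Wx = rng.uniform(-s, s, (hid_dim, in_dim))
        self.Wh = rng.uniform(-s, s, (hid_dim, hid_dim))
        self.b = np.zeros(hid_dim)
        self.w_out = rng.uniform(-s, s, hid_dim)

    def predict(self, seq):
        h = np.zeros(self.Wh.shape[0])
        for x in seq:  # one recurrent step per trajectory sample
            h = np.tanh(self.Wx @ x + self.Wh @ h + self.b)
        # squash the final hidden state to a (0, 1) quality score
        return 1.0 / (1.0 + np.exp(-self.w_out @ h))

traj = rng.normal(size=(20, 6))  # 20 timesteps of a 6-D state vector
score = TinyRNNSurrogate().predict(traj)
```

The speed-up comes from querying a cheap model like this instead of running a full net-dynamics simulation for every candidate action during RL training.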
Stats
"Simulation-based experiments show that this approach allows the successful capture of debris at fuel costs that are notably lower than nominal baselines."
"Each MU then seeks to follow its assigned trajectory by using a decentralized PID controller that outputs the MU’s thrust vector and is informed by noisy sensor feedback (for realism) of its relative location."
"Performance of the resulting tether-net maneuver process is compared to nominal cases in Sec. IV."