Reinforcement Learning for Low-Thrust Trajectory Design of Interplanetary Missions

From MaRDI portal
Publication: Q6347374

arXiv: 2008.08501

Author name not available

Publication date: 19 August 2020

Abstract: This paper investigates the use of Reinforcement Learning for the robust design of low-thrust interplanetary trajectories in the presence of severe disturbances, modeled alternatively as Gaussian additive process noise, observation noise, control actuation errors on thrust magnitude and direction, and possibly multiple missed thrust events. The optimal control problem is recast as a time-discrete Markov Decision Process to comply with the standard formulation of reinforcement learning. An open-source implementation of the state-of-the-art Proximal Policy Optimization algorithm is adopted to train a deep neural network that maps the spacecraft's (observed) states to the optimal control policy. The resulting Guidance and Control Network provides both a robust nominal trajectory and the associated closed-loop guidance law. Numerical results are presented for a typical Earth-Mars mission. First, to validate the proposed approach, the solution found in a (deterministic) unperturbed scenario is compared with the optimal one provided by an indirect technique. Then, the robustness and optimality of the obtained closed-loop guidance laws are assessed by means of Monte Carlo campaigns performed in the considered uncertain scenarios. These preliminary results open up new horizons for the use of reinforcement learning in the robust design of interplanetary missions.
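To make the abstract's formulation concrete, the sketch below shows one way the continuous-time low-thrust dynamics could be recast as a time-discrete MDP step with the stated disturbance models: Gaussian actuation errors on thrust magnitude and direction, missed thrust events, and observation noise. This is an illustrative reconstruction, not the authors' code; all names, units, and noise levels are hypothetical, and the dynamics are reduced to planar two-body motion for brevity.

```python
import numpy as np

MU = 1.0     # gravitational parameter, canonical units (assumed)
DT = 0.05    # duration of one MDP time step (assumed)
T_MAX = 0.1  # maximum thrust acceleration; mass dynamics omitted (assumed)

def mdp_step(state, action, rng,
             sigma_mag=0.02, sigma_dir=0.01, p_mte=0.05):
    """Propagate (r, v) over one MDP time step under a noisy thrust action.

    state  : [x, y, vx, vy] heliocentric position/velocity (planar, for brevity)
    action : [throttle in [0, 1], thrust angle in rad] chosen by the policy
    """
    throttle, angle = action
    # Control actuation errors: Gaussian noise on thrust magnitude and direction.
    throttle = np.clip(throttle * (1.0 + sigma_mag * rng.standard_normal()), 0.0, 1.0)
    angle = angle + sigma_dir * rng.standard_normal()
    # Missed thrust event: with probability p_mte the engine does not fire at all.
    if rng.random() < p_mte:
        throttle = 0.0
    acc_thrust = T_MAX * throttle * np.array([np.cos(angle), np.sin(angle)])

    def dynamics(s):
        r, v = s[:2], s[2:]
        acc_grav = -MU * r / np.linalg.norm(r) ** 3
        return np.concatenate([v, acc_grav + acc_thrust])

    # Fixed-step RK4 integration of the perturbed dynamics over one step.
    k1 = dynamics(state)
    k2 = dynamics(state + 0.5 * DT * k1)
    k3 = dynamics(state + 0.5 * DT * k2)
    k4 = dynamics(state + DT * k3)
    return state + DT / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def observe(state, rng, sigma_obs=1e-3):
    """Observation noise: the policy sees a Gaussian-perturbed copy of the state."""
    return state + sigma_obs * rng.standard_normal(state.shape)
```

A policy network trained with PPO would then act on `observe(state, ...)` at each step, closing the guidance loop; the reward (not shown) would penalize propellant use and terminal state errors at Mars arrival.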

Has companion code repository: https://github.com/LorenzoFederici/RobustTrajectoryDesignbyRL

