Langevin algorithms for Markovian Neural Networks and Deep Stochastic Control
From MaRDI portal
Publication:6421453
arXiv: 2212.12018 · MaRDI QID: Q6421453
Author name not available
Publication date: 22 December 2022
Abstract: Stochastic Gradient Descent Langevin Dynamics (SGLD) algorithms, which add noise to classic gradient descent, are known to improve the training of neural networks in some cases where the network is very deep. In this paper we study the possibilities of training acceleration for the numerical resolution of stochastic control problems through gradient descent, where the control is parametrized by a neural network. If the control is applied at many discretization times, then solving the stochastic control problem reduces to minimizing the loss of a very deep neural network. We numerically show that Langevin algorithms improve the training on various stochastic control problems, such as hedging and resource management, and for different choices of gradient descent methods.
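The core idea described in the abstract, adding noise to the gradient step, can be illustrated with a minimal sketch of an SGLD update. This is not the paper's implementation (see the companion repository below for that); it is a toy example on a hypothetical quadratic loss, with the step size and noise scale chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_step(theta, grad_fn, lr, sigma, rng):
    """One Langevin update: a gradient step plus scaled Gaussian noise.

    The noise term (absent from plain gradient descent) is what lets the
    iterate escape flat regions and poor local minima in deep networks.
    """
    noise = rng.normal(size=theta.shape)
    return theta - lr * grad_fn(theta) + sigma * np.sqrt(lr) * noise

# Toy loss L(theta) = 0.5 * ||theta||^2, so grad L(theta) = theta.
grad_fn = lambda theta: theta

theta = np.ones(3)
for _ in range(500):
    theta = sgld_step(theta, grad_fn, lr=0.1, sigma=0.01, rng=rng)
```

After many steps the iterate fluctuates around the minimizer at the origin, with a spread controlled by `sigma`; setting `sigma = 0` recovers ordinary gradient descent.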
Has companion code repository: https://github.com/bras-p/langevin-for-stochastic-control