Scalable Gradients for Stochastic Differential Equations
Publication: 6332337
arXiv: 2001.01328
MaRDI QID: Q6332337
Author name not available
Publication date: 5 January 2020
Abstract: The adjoint sensitivity method scalably computes gradients of solutions to ordinary differential equations. We generalize this method to stochastic differential equations, allowing time-efficient and constant-memory computation of gradients with high-order adaptive solvers. Specifically, we derive a stochastic differential equation whose solution is the gradient, a memory-efficient algorithm for caching noise, and conditions under which numerical solutions converge. In addition, we combine our method with gradient-based stochastic variational inference for latent stochastic differential equations. We use our method to fit stochastic dynamics defined by neural networks, achieving competitive performance on a 50-dimensional motion capture dataset.
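As background for the abstract: the deterministic adjoint sensitivity method that the paper generalizes is the standard ODE adjoint used in neural ODEs. The block below states only this known ODE case; the stochastic analogue derived in the paper replaces these equations with an SDE driven by the same noise as the forward process.

```latex
% Standard adjoint sensitivity method for ODEs (the deterministic case
% that the paper generalizes to SDEs).
% Forward dynamics: dz/dt = f(z(t), t, \theta), with loss L = L(z(t_1)).
% The adjoint a(t) = dL/dz(t) solves, backwards in time from t_1 to t_0:
\[
  \frac{\mathrm{d}a(t)}{\mathrm{d}t}
    = -\, a(t)^{\top} \frac{\partial f(z(t), t, \theta)}{\partial z},
  \qquad a(t_1) = \frac{\partial L}{\partial z(t_1)},
\]
% and the parameter gradient is accumulated along the backward solve:
\[
  \frac{\mathrm{d}L}{\mathrm{d}\theta}
    = -\int_{t_1}^{t_0} a(t)^{\top}
        \frac{\partial f(z(t), t, \theta)}{\partial \theta}\,\mathrm{d}t .
\]
```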
Companion code repository: https://github.com/google-research/torchsde
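The linked repository exposes the method as a drop-in SDE solver with an adjoint backward pass. Below is a minimal sketch of its use, closely following the repository's README; the diagonal-noise model, the Stratonovich/midpoint configuration, and the toy loss are illustrative assumptions rather than part of this record.

```python
import torch
import torchsde

batch_size, state_size, t_size = 32, 3, 20

class SDE(torch.nn.Module):
    # Diagonal noise: the diffusion has one independent noise channel
    # per state dimension, so g returns the same shape as y.
    noise_type = "diagonal"
    # The adjoint in the paper is derived for Stratonovich SDEs.
    sde_type = "stratonovich"

    def __init__(self):
        super().__init__()
        self.mu = torch.nn.Linear(state_size, state_size)     # drift net
        self.sigma = torch.nn.Linear(state_size, state_size)  # diffusion net

    def f(self, t, y):
        # Drift term, shape (batch_size, state_size).
        return self.mu(y)

    def g(self, t, y):
        # Diffusion term; same shape as y for diagonal noise.
        return self.sigma(y)

sde = SDE()
y0 = torch.full((batch_size, state_size), 0.1)
ts = torch.linspace(0, 1, t_size)

# sdeint_adjoint computes gradients with the stochastic adjoint method,
# so memory cost stays constant in the number of solver steps.
ys = torchsde.sdeint_adjoint(sde, y0, ts, method="midpoint")

# Toy objective: drive the terminal state toward zero.
loss = ys[-1].pow(2).mean()
loss.backward()  # populates gradients of sde's parameters
```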