Approximation capabilities of measure-preserving neural networks
From MaRDI portal
Publication: 6072433
DOI: 10.1016/j.neunet.2021.12.007 · arXiv: 2106.10911 · OpenAlex: W3174926092 · MaRDI QID: Q6072433
Aiqing Zhu, Pengzhan Jin, Yi-Fa Tang
Publication date: 13 October 2023
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/2106.10911
Artificial intelligence (68Txx) · Dynamical systems and ergodic theory (37-XX) · Approximations and expansions (41-XX)
Cites Work
- Unnamed Item
- \(L^p\) approximation of maps by diffeomorphisms
- Volume-preserving algorithms for source-free dynamical systems
- Quantifying the generalization error in deep learning in terms of data distribution and neural network smoothness
- SympNets: intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems
- A mean-field optimal control formulation of deep learning
- A proposal on machine learning via dynamical systems
- Solving Ordinary Differential Equations I
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Polar factorization and monotone rearrangement of vector-valued functions
- Polynomial approximations of symplectic dynamics and richness of chaos in non-hyperbolic area-preserving maps
- DeepXDE: A Deep Learning Library for Solving Differential Equations
- Geometric Numerical Integration
- Approximation by superpositions of a sigmoidal function
- Neural network approximation: three hidden layers are enough