Feedforward Neural Networks and Compositional Functions with Applications to Dynamical Systems
From MaRDI portal
Publication:5065061
DOI: 10.1137/21M1391596 · OpenAlex: W4221117674 · MaRDI QID: Q5065061
Publication date: 18 March 2022
Published in: SIAM Journal on Control and Optimization
Full work available at URL: https://doi.org/10.1137/21m1391596
Mathematics Subject Classification:
- Dynamical systems and ergodic theory (37-XX)
- Computer science (68-XX)
- Calculus of variations and optimal control; optimization (49-XX)
Related Items (3)
- Feature-informed data assimilation
- Improved Analysis of PINNs: Alleviate the CoD for Compositional Solutions
- Approximation of compositional functions with ReLU neural networks
Cites Work
- Overcoming the curse of dimensionality for some Hamilton-Jacobi partial differential equations via neural network architectures
- Learning dynamical systems from data: a simple cross-validation perspective. I: Parametric kernel flows
- Algorithms of data generation for deep learning and feedback design: a survey
- Function approximation by deep networks
- Approximation of Lyapunov functions from noisy data
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- Deep vs. shallow networks: An approximation theory perspective
- Solving Ordinary Differential Equations I
- Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming
- Universal approximation bounds for superpositions of a sigmoidal function
- Solving high-dimensional partial differential equations using deep learning
- Adaptive Deep Learning for High-Dimensional Hamilton--Jacobi--Bellman Equations
- Algorithms for solving high dimensional PDEs: from nonlinear Monte Carlo to machine learning
- Approximation by superpositions of a sigmoidal function