On the training and generalization of deep operator networks
DOI: 10.1137/23M1598751
zbMATH Open: 1543.6833
MaRDI QID: Q6573171
Publication date: 16 July 2024
Published in: SIAM Journal on Scientific Computing
Mathematics Subject Classification:
- Artificial neural networks and deep learning (68T07)
- Numerical optimization and variational techniques (65K10)
Cited works:
- On the stability and accuracy of least squares approximations
- Correction to: "On the stability and accuracy of least squares approximations"
- A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data
- Improved architectures and training algorithms for deep operator networks
- Gradient descent optimizes over-parameterized deep ReLU networks
- The deal.II library, version 9.2
- SVD perspectives for augmenting DeepONet flexibility and interpretability
- Spectral Methods for Time-Dependent Problems
- MIONet: Learning Multiple-Input Operators via Tensor Product
- Error estimates for DeepONets: a deep learning framework in infinite dimensions
- Active Neuron Least Squares: A Training Method for Multivariate Rectified Neural Networks
- Plateau Phenomenon in Gradient Descent Training of ReLU Networks: Explanation, Quantification, and Avoidance
- On the Convergence of Physics Informed Neural Networks for Linear Second-Order Elliptic and Parabolic Type PDEs
- Exponential Convergence of Deep Operator Networks for Elliptic Partial Differential Equations
- Convergence rate of DeepONets for learning operators arising from advection-diffusion equations