Improved architectures and training algorithms for deep operator networks
Publication: 2149522
DOI: 10.1007/s10915-022-01881-0
OpenAlex: W3202809124
Wikidata: Q114225558
Scholia: Q114225558
MaRDI QID: Q2149522
Sifan Wang, Hanwen Wang, Paris Perdikaris
Publication date: 29 June 2022
Published in: Journal of Scientific Computing
Full work available at URL: https://arxiv.org/abs/2110.01654
Mathematics Subject Classification:
- Artificial intelligence (68Txx)
- Numerical methods for ordinary differential equations (65Lxx)
- Numerical methods for partial differential equations, initial value and time-dependent initial-boundary value problems (65Mxx)
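For orientation, the sketch below shows the vanilla (unstacked) DeepONet forward pass that this publication's improved architectures and training algorithms build on: a branch network encodes the input function at fixed sensor locations, a trunk network encodes a query coordinate, and the operator output is their inner product. The layer widths, NumPy implementation, and random (untrained) weights are illustrative assumptions, not the authors' code.

import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    # Random weights/biases for a fully connected network (illustrative only).
    return [(rng.standard_normal((fan_in, fan_out)) / np.sqrt(fan_in), np.zeros(fan_out))
            for fan_in, fan_out in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    # Tanh MLP; the last layer is linear.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

m, p = 100, 50                       # number of input-function sensors, latent width (assumed)
branch = mlp_params([m, 64, 64, p])  # branch net: encodes u(x_1), ..., u(x_m)
trunk  = mlp_params([2, 64, 64, p])  # trunk net: encodes a query coordinate y
bias = 0.0

def deeponet(u_sensors, y):
    # G(u)(y) ~ sum_k b_k(u) * t_k(y) + bias
    b = mlp_forward(branch, u_sensors[None, :])  # shape (1, p)
    t = mlp_forward(trunk, y[None, :])           # shape (1, p)
    return float(np.sum(b * t) + bias)

# Example: evaluate the (untrained) operator at one query point.
u = rng.standard_normal(m)           # input function sampled at m sensor locations
y = np.array([0.5, 0.1])             # query coordinate, e.g. (x, t)
print(deeponet(u, y))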
Related Items (9)
- Multifidelity deep operator networks for data-driven and physics-informed problems
- Deep learning methods for partial differential equations and related parameter identification problems
- SVD perspectives for augmenting DeepONet flexibility and interpretability
- Novel DeepONet architecture to predict stresses in elastoplastic structures with variable complex geometries and loads
- A multifidelity deep operator network approach to closure for multiscale systems
- NeuralUQ: a comprehensive library for uncertainty quantification in neural differential equations and operators
- En-DeepONet: an enrichment approach for enhancing the expressivity of neural operators with applications to seismology
- Kernel methods are competitive for operator learning
- ImprovedDeepONets
Uses Software
Cites Work
- Exponential time differencing for stiff systems
- Deep learning of free boundary and Stefan problems
- DeepM&Mnet: inferring the electroconvection multiphysics fields based on operator approximation by neural networks
- When and why PINNs fail to train: a neural tangent kernel perspective
- On the eigenvector bias of Fourier feature networks: from regression to solving multi-scale PDEs with physics-informed neural networks
- A First Course in the Numerical Analysis of Differential Equations
- Understanding and Mitigating Gradient Flow Pathologies in Physics-Informed Neural Networks
- Estimates on the generalization error of physics-informed neural networks for approximating a class of inverse problems for PDEs
- Error estimates for DeepONets: a deep learning framework in infinite dimensions
- Frequency Principle: Fourier Analysis Sheds Light on Deep Neural Networks
- On the Convergence of Physics Informed Neural Networks for Linear Second-Order Elliptic and Parabolic Type PDEs
- Wide neural networks of any depth evolve as linear models under gradient descent