Out-of-distributional risk bounds for neural operators with applications to the Helmholtz equation
From MaRDI portal
Publication: 6572185
DOI: 10.1016/j.jcp.2024.113168 · MaRDI QID: Q6572185
Xavier Tricoche, Jose Antonio Lara Benitez, Florian Faucher, Takashi Furuya, Anastasis Kratsios, Maarten V. de Hoop
Publication date: 15 July 2024
Published in: Journal of Computational Physics
Keywords: forward operator; generalization error bounds; neural operator; out-of-distributional risk bounds; transformer-inspired
MSC classifications: Artificial intelligence (68Txx); Numerical methods for partial differential equations, boundary value problems (65Nxx); Stochastic processes (60Gxx)
Cites Work
- Probability in Banach spaces. Isoperimetry and processes
- On statistical Calderón problems
- Iterative regularization methods for nonlinear ill-posed problems
- Advances in iterative methods and preconditioners for the Helmholtz equation
- Entropy numbers, s-numbers, and eigenvalue problems
- A strong convergence theorem for Banach space valued random variables
- Metric entropy and the small ball problem for Gaussian measures
- Sharper bounds for Gaussian and empirical processes
- Approximation, metric entropy and small ball estimates for Gaussian measures
- Non-intrusive reduced order modeling of nonlinear problems using neural networks
- Adjoint-state method for hybridizable discontinuous Galerkin discretization, application to the inverse acoustic wave problem
- Model reduction and neural networks for parametric PDEs
- Deep learning architectures for nonlinear operator functions and nonlinear inverse problems
- A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data
- Learning deep implicit Fourier neural operators (IFNOs) with applications to heterogeneous material modeling
- Regularity and convergence analysis in Sobolev and Hölder spaces for generalized Whittle-Matérn fields
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- Exact lower bounds for the agnostic probably-approximately-correct (PAC) machine learning model
- Optimal transport for applied mathematicians. Calculus of variations, PDEs, and modeling
- On the mean speed of convergence of empirical and occupation measures in Wasserstein distance
- Wavelet neural operator for solving parametric partial differential equations in computational mechanics problems
- Conditional Convergence of Infinite Products
- Inverse Modeling
- A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems
- Inverse Boundary Value Problem For The Helmholtz Equation: Quantitative Conditional Lipschitz Stability Estimates
- Ensemble Kalman methods for inverse problems
- Inverse problems: A Bayesian perspective
- Why it is Difficult to Solve Helmholtz Problems with Classical Iterative Methods
- An Introduction to Computational Stochastic PDEs
- Generalization Error in Deep Learning
- Absorbing boundary conditions for numerical simulation of waves
- Fast and Exact Simulation of Stationary Gaussian Processes through Circulant Embedding of the Covariance Matrix
- Parameterizations for ensemble Kalman inversion
- A Class of Iterative Solvers for the Helmholtz Equation: Factorizations, Sweeping Preconditioners, Source Transfer, Single Layer Potentials, Polarized Traces, and Optimized Schwarz Methods
- High-Dimensional Statistics
- Mathematical Foundations of Infinite-Dimensional Statistical Models
- Time-Domain Scattering
- Error estimates for DeepONets: a deep learning framework in infinite dimensions
- Numerical solution of fractional elliptic stochastic PDEs with spatial white noise
- Stochastic Equations in Infinite Dimensions
- Understanding Machine Learning
- Fundamentals of Nonparametric Bayesian Inference
- Deep learning: a statistical viewpoint
- Optimal Transport
- Small deviations for some multi-parameter Gaussian processes
- Adaptive metric dimensionality reduction
- Exponential Convergence of Deep Operator Networks for Elliptic Partial Differential Equations
- Convergence Rates for Learning Linear Operators from Noisy Data
- Designing universal causal deep learning models: The geometric (Hyper)transformer
- Provable Training of a ReLU Gate with an Iterative Non-Gradient Algorithm
- On polynomial-time computation of high-dimensional posterior measures by Langevin-type algorithms
- Convergence rate of DeepONets for learning operators arising from advection-diffusion equations