The Random Feature Model for Input-Output Maps between Banach Spaces
Publication: 3382802
DOI: 10.1137/20M133957X
MaRDI QID: Q3382802
Nicholas H. Nelsen, Andrew M. Stuart
Publication date: 22 September 2021
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://arxiv.org/abs/2005.10224
Keywords: emulator; supervised learning; model reduction; high-dimensional approximation; surrogate model; solution map; data-driven computing; parametric PDE; random feature
Mathematics Subject Classification:
- PDEs with randomness, stochastic partial differential equations (35R60)
- Algorithms for approximation of functions (65D15)
- Neural nets and related approaches to inference from stochastic processes (62M45)
- Numerical approximation of high-dimensional functions; sparse grids (65D40)
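As a rough orientation for this record: the random feature model trains only the linear output coefficients on top of randomly drawn, fixed nonlinear features, reducing training to a regularized least-squares solve. A minimal finite-dimensional sketch (random Fourier features with ridge regression; the data, feature count, bandwidth, and regularization value below are illustrative choices, not taken from the paper):

```python
# Minimal random-feature ridge regression sketch (finite-dimensional
# analogue of the random feature model; all values are illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(3x) from noisy samples on [-1, 1].
x_train = rng.uniform(-1.0, 1.0, size=(200, 1))
y_train = np.sin(3.0 * x_train[:, 0]) + 0.01 * rng.normal(size=200)

m = 256      # number of random features (illustrative)
lam = 1e-6   # ridge regularization strength (illustrative)

# Random features phi_j(x) = cos(w_j . x + b_j), with w_j ~ N(0, sigma^2)
# and b_j ~ Uniform(0, 2*pi); only the linear coefficients are trained.
w = rng.normal(scale=3.0, size=(1, m))
b = rng.uniform(0.0, 2.0 * np.pi, size=m)

def features(x):
    return np.cos(x @ w + b)  # shape (n, m)

Phi = features(x_train)
# Regularized least squares for the trainable coefficients.
coef = np.linalg.solve(Phi.T @ Phi + lam * m * np.eye(m), Phi.T @ y_train)

x_test = np.linspace(-1.0, 1.0, 50)[:, None]
y_pred = features(x_test) @ coef
err = np.max(np.abs(y_pred - np.sin(3.0 * x_test[:, 0])))
print(f"max abs error: {err:.3f}")
```

The paper lifts this construction to maps between Banach spaces, where the random features are themselves operator-valued; the sketch above only conveys the train-the-last-layer structure.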
Related Items
- MIONet: Learning Multiple-Input Operators via Tensor Product
- Two-Layer Neural Networks with Values in a Banach Space
- Reduced Operator Inference for Nonlinear Partial Differential Equations
- Iterated Kalman methodology for inverse problems
- Variational regularization in inverse problems and machine learning
- A framework for machine learning of model error in dynamical systems
- Learning high-dimensional parametric maps via reduced basis adaptive residual networks
- Convergence Rates for Learning Linear Operators from Noisy Data
- Transferable neural networks for partial differential equations
- Sparse Recovery of Elliptic Solvers from Matrix-Vector Products
- Large-scale Bayesian optimal experimental design with derivative-informed projected neural network
- SPADE4: sparsity and delay embedding based forecasting of epidemics
- Data-driven forward and inverse problems for chaotic and hyperchaotic dynamic systems based on two machine learning architectures
- Local approximation of operators
- Energy-dissipative evolutionary deep operator neural networks
- Fast macroscopic forcing method
- Optimal Dirichlet boundary control by Fourier neural operators applied to nonlinear optics
- Derivative-informed neural operator: an efficient framework for high-dimensional parametric derivative learning
- Learning phase field mean curvature flows with neural networks
- Do ideas have shape? Idea registration as the continuous limit of artificial neural networks
Uses Software
Cites Work
- Variational training of neural network approximations of solution maps for physical models
- Data driven approximation of parametrized PDEs by reduced basis and neural networks
- Machine learning from a continuous viewpoint. I
- Blow up and regularity for fractal Burgers equation
- A least-squares approximation of partial differential equations with high-dimensional random inputs
- Spatial variation. 2nd ed
- Adaptive finite element methods for elliptic equations with non-smooth coefficients
- Elliptic partial differential equations of second order
- Non-intrusive reduced order modeling of nonlinear problems using neural networks
- Hierarchical Bayesian level set inversion
- Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification
- The Deep Ritz Method: a deep learning-based numerical algorithm for solving variational problems
- Functional multi-layer perceptron: A nonlinear tool for functional data analysis
- An 'empirical interpolation' method: Application to efficient reduced-basis discretization of partial differential equations
- Bayesian learning for neural networks
- Deep neural networks motivated by partial differential equations
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- DGM: a deep learning algorithm for solving partial differential equations
- A physics-informed operator regression framework for extracting data-driven continuum models
- Numerical solution of the parametric diffusion equation by deep neural networks
- Model reduction and neural networks for parametric PDEs
- Data-driven deep learning of partial differential equations in modal space
- Meta-learning pseudo-differential operators with deep neural networks
- ConvPDE-UQ: convolutional neural networks with quantified uncertainty for heterogeneous elliptic partial differential equations on varied domains
- Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders
- Solving electrical impedance tomography with deep learning
- Kernel-based reconstructions for parametric PDEs
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- A mean-field optimal control formulation of deep learning
- Optimal rates for the regularized least-squares algorithm
- A proposal on machine learning via dynamical systems
- On the mathematical foundations of learning
- Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs
- Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem
- Optimization with PDE Constraints
- Universal approximation bounds for superpositions of a sigmoidal function
- Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization
- Model Reduction and Approximation
- Stable architectures for deep neural networks
- Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ
- Optimal weighted least-squares methods
- Data-driven forward discretizations for Bayesian inversion
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- Learning data-driven discretizations for partial differential equations
- Approximation of high-dimensional parametric PDEs
- Reproducing Kernel Hilbert Spaces for Parametric Partial Differential Equations
- Fourth-Order Time-Stepping for Stiff PDEs
- On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions
- A Data-Driven Stochastic Method for Elliptic PDEs with Random Coefficients
- Algorithms for Numerical Analysis in High Dimensions
- On Learning Vector-Valued Functions
- Theory of Reproducing Kernels
- Scattered Data Approximation
- MCMC methods for functions: modifying old algorithms to make them faster