Operator learning using random features: a tool for scientific computing
DOI: 10.1137/24m1648703 · zbMATH Open: 1545.68106 · MaRDI QID: Q6585281
Nicholas H. Nelsen, Andrew M. Stuart
Publication date: 9 August 2024
Published in: SIAM Review
Keywords: surrogate model; kernel ridge regression; parametric partial differential equation; scientific machine learning; operator learning; random feature
MSC classifications: Ridge regression; shrinkage estimators (Lasso) (62J07) · Learning and adaptive systems in artificial intelligence (68T05) · PDEs with randomness, stochastic partial differential equations (35R60) · Applications of operator theory in numerical analysis (47N40) · Numerical approximation of high-dimensional functions; sparse grids (65D40)
Cites Work
- Title not available
- Title not available
- Title not available
- Title not available
- Title not available
- Title not available
- Title not available
- Variational training of neural network approximations of solution maps for physical models
- Data driven approximation of parametrized PDEs by reduced basis and neural networks
- Machine learning from a continuous viewpoint. I
- Blow up and regularity for fractal Burgers equation
- A least-squares approximation of partial differential equations with high-dimensional random inputs
- Adaptive finite element methods for elliptic equations with non-smooth coefficients
- Non-intrusive reduced order modeling of nonlinear problems using neural networks
- Hierarchical Bayesian level set inversion
- Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification
- The Deep Ritz Method: a deep learning-based numerical algorithm for solving variational problems
- Functional multi-layer perceptron: A nonlinear tool for functional data analysis
- An 'empirical interpolation' method: application to efficient reduced-basis discretization of partial differential equations
- Bayesian learning for neural networks
- Estimation and detection of functions from anisotropic Sobolev classes
- Deep neural networks motivated by partial differential equations
- Deep UQ: learning deep neural network surrogate models for high dimensional uncertainty quantification
- DGM: a deep learning algorithm for solving partial differential equations
- A physics-informed operator regression framework for extracting data-driven continuum models
- Numerical solution of the parametric diffusion equation by deep neural networks
- Model reduction and neural networks for parametric PDEs
- Derivative-informed projected neural networks for high-dimensional parametric maps governed by PDEs
- Generalization bounds for sparse random feature expansions
- Do ideas have shape? Idea registration as the continuous limit of artificial neural networks
- Lift & learn: physics-informed machine learning for large-scale nonlinear dynamical systems
- A theoretical analysis of deep neural networks and parametric PDEs
- The Barron space and the flow-induced function spaces for neural network models
- Data-driven deep learning of partial differential equations in modal space
- Meta-learning pseudo-differential operators with deep neural networks
- Non-intrusive model reduction of large-scale, nonlinear dynamical systems using deep learning
- Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration
- ConvPDE-UQ: convolutional neural networks with quantified uncertainty for heterogeneous elliptic partial differential equations on varied domains
- Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders
- Solving electrical impedance tomography with deep learning
- Kernel-based reconstructions for parametric PDEs
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- A mean-field optimal control formulation of deep learning
- Optimal rates for the regularized least-squares algorithm
- A proposal on machine learning via dynamical systems
- Estimation and detection of functions from weighted tensor product spaces
- Operator inference for non-intrusive model reduction with quadratic manifolds
- Learning elliptic partial differential equations with randomized linear algebra
- On the mathematical foundations of learning
- Operator-valued kernels for learning from functional response data
- Sparse adaptive Taylor approximation algorithms for parametric and stochastic elliptic PDEs
- Inverse problems: a Bayesian perspective
- Neural Networks for Functional Approximation and System Identification
- The Random Feature Model for Input-Output Maps between Banach Spaces
- Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem
- Universal approximation bounds for superpositions of a sigmoidal function
- Survey of Multifidelity Methods in Uncertainty Propagation, Inference, and Optimization
- Model Reduction and Approximation
- Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ
- Optimal weighted least-squares methods
- Data-driven forward discretizations for Bayesian inversion
- Solving parametric PDE problems with artificial neural networks
- Two-Layer Neural Networks with Values in a Banach Space
- Error estimates for DeepONets: a deep learning framework in infinite dimensions
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- Learning data-driven discretizations for partial differential equations
- Solving inverse problems using data-driven models
- Approximation of high-dimensional parametric PDEs
- Reproducing Kernel Hilbert Spaces for Parametric Partial Differential Equations
- Fourth-Order Time-Stepping for Stiff PDEs
- On the Equivalence between Kernel Quadrature Rules and Random Feature Expansions
- Continuous analogues of matrix factorizations
- A Data-Driven Stochastic Method for Elliptic PDEs with Random Coefficients
- An Extension of Chebfun to Two Dimensions
- Nonparametric regression on functional data: inference and practical aspects
- Algorithms for Numerical Analysis in High Dimensions
- On Learning Vector-Valued Functions
- Theory of Reproducing Kernels
- Optimal experimental design for infinite-dimensional Bayesian inverse problems governed by PDEs: a review
- MCMC methods for functions: modifying old algorithms to make them faster
- Neural-network-augmented projection-based model order reduction for mitigating the Kolmogorov barrier to reducibility
- Approximation bounds for random neural networks and reservoir systems
- Convergence Rates for Learning Linear Operators from Noisy Data
- Optimal approximation of infinite-dimensional holomorphic functions
- Sparse Recovery of Elliptic Solvers from Matrix-Vector Products
- Kernel methods are competitive for operator learning
- Approximation bounds for convolutional neural networks in operator learning
- Error estimates for POD-DL-ROMs: a deep learning framework for reduced order modeling of nonlinear parametrized PDEs enhanced by proper orthogonal decomposition
- The Elements of Statistical Learning
Related Items (1)