Deep Neural Networks for Solving Large Linear Systems Arising from High-Dimensional Problems
DOI: 10.1137/22m1488132 · zbMath: 1525.65026 · arXiv: 2204.00313 · MaRDI QID: Q6054285
Publication date: 27 September 2023
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://arxiv.org/abs/2204.00313
Keywords: neural networks; partial differential equations; probabilistic Boolean networks; Riesz fractional diffusion; very large scale linear systems; overflow queuing model
MSC classification: Computational methods for sparse matrices (65F50); Artificial neural networks and deep learning (68T07); Queueing theory (aspects of probability theory) (60K25); Iterative numerical methods for linear systems (65F10); Numerical solution of discretized equations for boundary value problems involving PDEs (65N22)
Cites Work
- Preconditioned iterative methods for fractional diffusion equation
- On computation of the steady-state probability distribution of probabilistic Boolean networks with gene perturbation
- Spectral analysis and structure preserving preconditioners for fractional diffusion equations
- Weak adversarial networks for high-dimensional partial differential equations
- Iterative methods for overflow queueing models. I
- Iterative methods for overflow queuing models. II
- On the condition numbers of large semi-definite Toeplitz matrices
- Spectral analysis and multigrid preconditioners for two-dimensional space-fractional diffusion equations
- Multilayer feedforward networks are universal approximators
- A new greedy Kaczmarz algorithm for the solution of very large linear systems
- Approximation rates for neural networks with general activation functions
- Exponential convergence of the deep neural network approximation for analytic functions
- Motivations and realizations of Krylov subspace methods for large sparse linear systems
- Representation formulas and pointwise properties for Barron functions
- The Barron space and the flow-induced function spaces for neural network models
- A greedy block Kaczmarz algorithm for solving large-scale linear systems
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Nonlinear approximation via compositions
- Multigrid preconditioners for anisotropic space-fractional diffusion equations
- A priori estimates of the population risk for two-layer neural networks
- Error bounds for approximations with deep ReLU networks
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- Krylov Subspace Methods for Linear Systems with Tensor Product Structure
- Universal approximation bounds for superpositions of a sigmoidal function
- Approximation by Combinations of ReLU and Squared ReLU Ridge Functions With $\ell^1$ and $\ell^0$ Controls
- Numerical solution to a linear equation with tensor product structure
- Conjugate Gradient Methods for Toeplitz Systems
- Solving high-dimensional partial differential equations using deep learning
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
- New Error Bounds for Deep ReLU Networks Using Sparse Grids
- Deep ReLU Networks Overcome the Curse of Dimensionality for Generalized Bandlimited Functions
- Discovery of Dynamics Using Linear Multistep Methods
- Deep Network Approximation for Smooth Functions
- Deep Network Approximation Characterized by Number of Neurons
- Learning Sparse Polynomial Functions
- A projection method to solve linear systems in tensor format
- Approximation by superpositions of a sigmoidal function
- Neural network approximation: three hidden layers are enough
- Neural network approximation and estimation of classifiers with classification boundary in a Barron class