Active Neuron Least Squares: A Training Method for Multivariate Rectified Neural Networks
Publication: 5095494
DOI: 10.1137/21M1460764
zbMath: 1492.65157
OpenAlex: W4289334731
Wikidata: Q114073942 (Scholia: Q114073942)
MaRDI QID: Q5095494
Publication date: 9 August 2022
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://doi.org/10.1137/21m1460764
MSC classifications:
- Artificial neural networks and deep learning (68T07)
- Numerical mathematical programming methods (65K05)
- Nonlinear programming (90C30)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- DeepONet
- Efficient implementation of weighted ENO schemes
- Adaptive activation functions accelerate convergence in deep and physics-informed neural networks
- On the eigenvector bias of Fourier feature networks: from regression to solving multi-scale PDEs with physics-informed neural networks
- Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations
- Adaptive restart for accelerated gradient schemes
- Understanding and Mitigating Gradient Flow Pathologies in Physics-Informed Neural Networks
- Solving high-dimensional partial differential equations using deep learning
- Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations
- Plateau Phenomenon in Gradient Descent Training of ReLU Networks: Explanation, Quantification, and Avoidance
- Dying ReLU and Initialization: Theory and Numerical Examples
- A Family of Variable-Metric Methods Derived by Variational Means
- A new approach to variable metric algorithms
- The Convergence of a Class of Double-rank Minimization Algorithms 1. General Considerations
- Conditioning of Quasi-Newton Methods for Function Minimization