On the regularizing property of stochastic gradient descent
From MaRDI portal
Publication: 4646419
DOI: 10.1088/1361-6420/aaea2a
zbMath: 1490.65070
arXiv: 1805.10470
OpenAlex: W3098767358
MaRDI QID: Q4646419
Publication date: 14 January 2019
Published in: Inverse Problems
Full work available at URL: https://arxiv.org/abs/1805.10470
Mathematics Subject Classification:
- Ill-posedness and regularization problems in numerical linear algebra (65F22)
- Numerical mathematical programming methods (65K05)
- Applications of mathematical programming (90C90)
Related Items
- Two-Layer Neural Networks with Values in a Banach Space
- Stochastic asymptotical regularization for linear inverse problems
- Stochastic gradient descent for linear inverse problems in Hilbert spaces
- Stochastic mirror descent method for linear ill-posed problems in Banach spaces
- Stochastic linear regularization methods: random discrepancy principle and applications
- On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
- On the Convergence of Stochastic Gradient Descent for Linear Inverse Problems in Banach Spaces
- On the discrepancy principle for stochastic gradient descent
- Convergence analyses based on frequency decomposition for the randomized row iterative method
- On the regularization effect of stochastic gradient descent applied to least-squares
- Randomized Kaczmarz Converges Along Small Singular Vectors
- An analysis of stochastic variance reduced gradient for linear inverse problems
Cites Work
- Nonparametric stochastic approximation with large step-sizes
- Iterative regularization methods for nonlinear ill-posed problems
- A randomized Kaczmarz algorithm with exponential convergence
- Online gradient descent learning algorithms
- The Mathematics of Computerized Tomography
- Inverse Problems
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Iteratively Regularized Gauss–Newton Method for Nonlinear Inverse Problems with Random Noise
- Robust Stochastic Approximation Approach to Stochastic Programming
- Acceleration of Stochastic Approximation by Averaging
- Online learning in optical tomography: a stochastic approach
- Preasymptotic convergence of randomized Kaczmarz method
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Optimization Methods for Large-Scale Machine Learning
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- Heuristic Parameter-Choice Rules for Convex Variational Regularization Based on Error Estimates
- On the Adaptive Selection of the Parameter in Regularization of Ill-Posed Problems
- Consistency and rates of convergence of nonlinear Tikhonov regularization with random noise
- A Stochastic Approximation Method
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm