Stochastic gradient descent: where optimization meets machine learning
From MaRDI portal
Publication:6200207
DOI: 10.4171/ICM2022/171 · OpenAlex: W4389775782 · MaRDI QID: Q6200207
Publication date: 22 March 2024
Published in: International Congress of Mathematicians
Full work available at URL: https://doi.org/10.4171/icm2022/171
MSC classifications:
- Learning and adaptive systems in artificial intelligence (68T05)
- Optimization problems in solid mechanics (74P99)
- Stochastic and other probabilistic methods applied to problems in solid mechanics (74S60)
Cites Work
- A randomized Kaczmarz algorithm with exponential convergence
- Error bounds for the method of alternating projections
- The rate of convergence for the method of alternating projections. II
- Robust Stochastic Approximation Approach to Stochastic Programming
- Optimization Methods for Large-Scale Machine Learning
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Theory of Reproducing Kernels
- A Stochastic Approximation Method
- Approximation Methods which Converge with Probability one
- Neural tangent kernel: convergence and generalization in neural networks (invited paper)
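The publication concerns stochastic gradient descent, whose basic update replaces the full gradient with the gradient of one randomly chosen sample term, x ← x − η ∇f_i(x). As context only (this is not the paper's own code), a minimal sketch on an illustrative least-squares problem; the step size, data, and function names are assumptions for the example:

```python
import random

def sgd(grad_i, x0, n_samples, lr=0.01, epochs=100):
    """Basic SGD: each step follows the gradient of one randomly
    chosen sample term, x <- x - lr * grad f_i(x)."""
    x = x0
    for _ in range(epochs):
        # One pass over the data in a fresh random order per epoch.
        for i in random.sample(range(n_samples), n_samples):
            x = x - lr * grad_i(i, x)
    return x

# Illustrative problem: minimize (1/n) * sum_i (a_i * x - b_i)^2,
# with data chosen so every term is minimized at x = 2.
data = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]
grad = lambda i, x: 2 * data[i][0] * (data[i][0] * x - data[i][1])
x_star = sgd(grad, x0=0.0, n_samples=len(data), lr=0.05, epochs=200)
print(round(x_star, 3))  # converges near 2.0
```

Because the example data are exactly consistent (b_i = 2 a_i), every per-sample gradient vanishes at the minimizer, so a constant step size suffices here; on noisy data a decaying step size is the standard choice.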
This page was built for publication: Stochastic gradient descent: where optimization meets machine learning