Convergence in quadratic mean of averaged stochastic gradient algorithms without strong convexity nor bounded gradient
From MaRDI portal
Publication:6168293
DOI: 10.1080/02331888.2023.2213371
arXiv: 2107.12058
OpenAlex: W4287067670
MaRDI QID: Q6168293
Publication date: 10 July 2023
Published in: Statistics
Full work available at URL: https://arxiv.org/abs/2107.12058
Keywords: stochastic optimization; online learning; averaging; stochastic gradient algorithm; non-asymptotic convergence
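The keywords above refer to Polyak–Ruppert averaging of stochastic gradient iterates. As a generic illustration (not the paper's specific algorithm or assumptions), the sketch below runs averaged SGD on a synthetic least-squares problem; the step size, noise level, and dimension are arbitrary choices for the example.

```python
import numpy as np

# Illustrative Polyak-Ruppert averaged SGD on a least-squares objective.
# theta is the raw stochastic gradient iterate; theta_bar is the running
# average of the iterates, which typically enjoys better quadratic-mean
# convergence than the raw sequence.

rng = np.random.default_rng(0)
d = 5
theta_star = rng.normal(size=d)          # hypothetical true parameter

theta = np.zeros(d)                       # raw SGD iterate
theta_bar = np.zeros(d)                   # Polyak-Ruppert average

n_steps = 20_000
for n in range(1, n_steps + 1):
    x = rng.normal(size=d)                # random design vector, E[x x^T] = I
    y = x @ theta_star + 0.1 * rng.normal()
    grad = (x @ theta - y) * x            # stochastic gradient of (x.theta - y)^2 / 2
    step = n ** -0.66                     # slowly decaying step gamma_n ~ n^{-2/3}
    theta -= step * grad
    theta_bar += (theta - theta_bar) / n  # online update of the iterate average

err_raw = np.linalg.norm(theta - theta_star)
err_avg = np.linalg.norm(theta_bar - theta_star)
print(f"raw iterate error: {err_raw:.4f}, averaged iterate error: {err_avg:.4f}")
```

The online update `theta_bar += (theta - theta_bar) / n` computes the mean of all iterates without storing them, which is the standard way averaging is implemented in streaming settings.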
Cites Work
- Efficient and fast estimation of the geometric median in Hilbert spaces with an averaged stochastic gradient algorithm
- Estimating the geometric median in Hilbert spaces with stochastic gradient algorithms: \(L^p\) and almost sure rates of convergence
- Multivariate location estimation using extension of \(R\)-estimates through \(U\)-statistics type approach
- On the almost sure asymptotic behaviour of stochastic algorithms
- Online estimation of the geometric median in Hilbert spaces: nonasymptotic confidence balls
- Non asymptotic controls on a recursive superquantile approximation
- Online estimation of the asymptotic variance for averaged stochastic gradient algorithms
- Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
- A Generalization of the Averaging Procedure: The Use of Two-Time-Scale Algorithms
- Acceleration of Stochastic Approximation by Averaging
- Asymptotic Almost Sure Efficiency of Averaged Stochastic Algorithms
- On Projected Stochastic Gradient Descent Algorithm with Weighted Averaging for Least Squares Regression
- A Stochastic Approximation Method
- \(L^p\) and almost sure rates of convergence of averaged stochastic gradient algorithms: locally strongly convex objective