Convergence of online mirror descent
Publication: 2278461
DOI: 10.1016/j.acha.2018.05.005
zbMATH: 1494.68219
arXiv: 1802.06357
OpenAlex: W2803423166
Wikidata: Q129764098 (Scholia: Q129764098)
MaRDI QID: Q2278461
Publication date: 5 December 2019
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/1802.06357
Classification:
- Convex programming (90C25)
- Learning and adaptive systems in artificial intelligence (68T05)
- Stochastic programming (90C15)
- Stochastic approximation (62L20)
- Online algorithms; streaming algorithms (68W27)
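For context, the online mirror descent iteration whose convergence this work studies takes, in its generic form, the standard update below (a sketch with mirror map \(\Psi\), step sizes \(\eta_t\), and loss gradients \(\nabla f_t\); the paper's precise assumptions and setting may differ):

\[
x_{t+1} = \arg\min_{x \in \mathcal{K}} \Big\{ \eta_t \langle \nabla f_t(x_t), x \rangle + D_\Psi(x, x_t) \Big\},
\qquad
D_\Psi(x, y) = \Psi(x) - \Psi(y) - \langle \nabla \Psi(y), x - y \rangle .
\]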
Related Items
- Block coordinate type methods for optimization and learning
- Federated learning for minimizing nonsmooth convex loss functions
- Distributed learning and distribution regression of coefficient regularization
- Analogues of Switching Subgradient Schemes for Relatively Lipschitz-Continuous Convex Programming Problems
- Analysis of singular value thresholding algorithm for matrix completion
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
Uses Software
Cites Work
- Unregularized online learning algorithms with general loss functions
- Multi-kernel regularized classifiers
- A randomized Kaczmarz algorithm with exponential convergence
- Online gradient descent learning algorithms
- Sharp uniform convexity and smoothness inequalities for trace norms
- Mirror descent and nonlinear projected subgradient methods for convex optimization
- Almost sure convergence of the Kaczmarz algorithm with random measurements
- Online regularized learning with pairwise loss functions
- AIR tools -- a MATLAB package of algebraic iterative reconstruction methods
- Online learning algorithms
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Linearized Bregman iterations for compressed sensing
- Online Regularized Classification Algorithms
- Robust Stochastic Approximation Approach to Stochastic Programming
- Optimization Methods for Large-Scale Machine Learning
- Regularization schemes for minimum error entropy principle
- Thresholded spectral algorithms for sparse approximations
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- On Complexity Issues of Online Learning Algorithms
- Learning theory of distributed spectral algorithms
- Analysis of Online Composite Mirror Descent Algorithm
- Robust Estimation of a Location Parameter
- A Stochastic Approximation Method