Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
From MaRDI portal
Publication: 2986183
DOI: 10.1109/TIT.2014.2332531
zbMath: 1360.62192
OpenAlex: W2087789467
MaRDI QID: Q2986183
Publication date: 16 May 2017
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://doi.org/10.1109/tit.2014.2332531
Mathematics Subject Classification:
Nonparametric regression and quantile regression (62G08)
Learning and adaptive systems in artificial intelligence (68T05)
Stochastic approximation (62L20)
Related Items
Stochastic subspace correction in Hilbert space
Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent
Generalization properties of doubly stochastic learning algorithms
Nonparametric stochastic approximation with large step-sizes
An Online Projection Estimator for Nonparametric Regression in Reproducing Kernel Hilbert Spaces
On the Convergence of Stochastic Gradient Descent for Nonlinear Ill-Posed Problems
Online regularized learning algorithm for functional data
Convergence analysis of online learning algorithm with two-stage step size
Online Pairwise Learning Algorithms
Analysis of Online Composite Mirror Descent Algorithm
Unregularized online learning algorithms with general loss functions
On the regularizing property of stochastic gradient descent
Learning Theory of Randomized Sparse Kaczmarz Method
Concentration bounds for temporal difference learning with linear function approximation: the case of batch data and uniform sampling
Fast and strong convergence of online learning algorithms
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
A sieve stochastic gradient descent estimator for online nonparametric regression in Sobolev ellipsoids
An analysis of stochastic variance reduced gradient for linear inverse problems