Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Publication: 4637017
zbMath: 1441.62215
arXiv: 1602.05419
MaRDI QID: Q4637017
Aymeric Dieuleveut, Francis Bach, Nicolas Flammarion
Publication date: 17 April 2018
Full work available at URL: https://arxiv.org/abs/1602.05419
Keywords: convex optimization, accelerated gradient, stochastic gradient, least-squares regression, non-parametric estimation
Mathematics Subject Classification: Computational methods for problems pertaining to statistics (62-08), Density estimation (62G07), Linear regression; mixed models (62J05), Stochastic approximation (62L20)
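The keywords above place the paper in the setting of least-squares regression solved by (averaged, possibly accelerated) stochastic gradient methods. As an illustration only, the minimal Python sketch below shows constant-step-size SGD with Polyak-Ruppert averaging (cf. "Acceleration of Stochastic Approximation by Averaging" in the cited works) on synthetic data; the dimension, step size, and noise level are assumptions for the example, not values taken from the paper.

```python
# Minimal sketch: averaged constant-step-size SGD for least-squares regression.
# Illustrative only; data model and hyperparameters are assumptions, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 100_000                                   # feature dimension and sample count (assumed)
theta_star = rng.normal(size=d)                      # ground-truth regression vector
X = rng.normal(size=(n, d))
y = X @ theta_star + 0.1 * rng.normal(size=n)        # noisy linear observations

gamma = 1.0 / (4.0 * np.mean(np.sum(X**2, axis=1)))  # constant step size on the order of 1/(4 R^2)
theta = np.zeros(d)
theta_bar = np.zeros(d)                              # Polyak-Ruppert average of the iterates

for i in range(n):
    x_i, y_i = X[i], y[i]
    grad = (x_i @ theta - y_i) * x_i                 # stochastic gradient of 0.5 * (x^T theta - y)^2
    theta -= gamma * grad
    theta_bar += (theta - theta_bar) / (i + 1)       # running average of the iterates

excess_risk = 0.5 * np.mean((X @ theta_bar - y) ** 2) - 0.5 * np.mean((X @ theta_star - y) ** 2)
print("empirical excess risk of the averaged iterate:", excess_risk)
```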
Related Items
- Some Limit Properties of Markov Chains Induced by Recursive Stochastic Algorithms
- Generalization properties of doubly stochastic learning algorithms
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
- On the rates of convergence of parallelized averaged stochastic gradient algorithms
- Bridging the gap between constant step size stochastic gradient descent and Markov chains
- On the Adaptivity of Stochastic Gradient-Based Optimization
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Finite impulse response models: a non-asymptotic analysis of the least squares estimator
- Concentration bounds for temporal difference learning with linear function approximation: the case of batch data and uniform sampling
- Dimension independent excess risk by stochastic gradient descent
- Dual Space Preconditioning for Gradient Descent
- From inexact optimization to learning via gradient concentration
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Performance of empirical risk minimization in linear aggregation
- Nonparametric stochastic approximation with large step-sizes
- The lower tail of random quadratic forms with applications to ordinary least squares
- First-order methods of smooth convex optimization with inexact oracle
- Random design analysis of ridge regression
- An optimal method for stochastic composite optimization
- Model selection for regularized least-squares algorithm in learning theory
- Concentration inequalities and model selection. Ecole d'Eté de Probabilités de Saint-Flour XXXIII -- 2003.
- Online gradient descent learning algorithms
- Introductory lectures on convex optimization. A basic course.
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Optimal rates for the regularized least-squares algorithm
- On early stopping in gradient descent learning
- Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
- Online Learning as Stochastic Approximation of Regularization Paths: Optimality and Almost-Sure Convergence
- Smooth Optimization with Approximate Gradient
- Support Vector Machines
- Robust Stochastic Approximation Approach to Stochastic Programming
- Acceleration of Stochastic Approximation by Averaging
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- On the Averaged Stochastic Approximation for Linear Regression
- An alternative point of view on Lepski's method
- Optimal Distributed Online Prediction using Mini-Batches
- Some methods of speeding up the convergence of iteration methods
- Theory of Reproducing Kernels
- A Stochastic Approximation Method
- Introduction to nonparametric estimation