Empirical risk minimization: probabilistic complexity and stepsize strategy
DOI: 10.1007/s10589-019-00080-2
zbMath: 1422.90036
OpenAlex: W2920685183
Wikidata: Q128287949 (Scholia: Q128287949)
MaRDI QID: Q2419551
Publication date: 13 June 2019
Published in: Computational Optimization and Applications
Full work available at URL: https://doi.org/10.1007/s10589-019-00080-2
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient methods for minimizing composite functions
- Universal gradient methods for convex optimization problems
- User-friendly tail bounds for sums of random matrices
- Introductory lectures on convex optimization. A basic course.
- A Gauss-Newton method for convex composite optimization
- Fine tuning Nesterov's steepest descent algorithm for differentiable convex programming
- Pivotal estimation via square-root lasso in nonparametric regression
- Efficient block-coordinate descent algorithms for the group Lasso
- Coordinate descent with arbitrary sampling II: expected separable overapproximation
- Sparse Matrix Inversion with Scaled Lasso
- Proximal Newton-Type Methods for Minimizing Composite Functions
- Non-asymptotic theory of random matrices: extreme singular values
- Bounding the Smallest Singular Value of a Random Matrix Without Concentration
- An Efficient Trust Region Algorithm for Minimizing Nondifferentiable Composite Functions
- On the Nonasymptotic Convergence of Cyclic Coordinate Descent Methods
- On the Convergence of Block Coordinate Descent Type Methods
- Stable signal recovery from incomplete and inaccurate measurements
- Understanding Machine Learning
- Compressed sensing