Non-asymptotic convergence analysis of inexact gradient methods for machine learning without strong convexity
Publication: 4594841
DOI: 10.1080/10556788.2017.1296439
zbMath: 1382.90079
arXiv: 1309.0113
OpenAlex: W2962834995
MaRDI QID: Q4594841
Anthony Man-Cho So, Zirui Zhou
Publication date: 24 November 2017
Published in: Optimization Methods and Software
Full work available at URL: https://arxiv.org/abs/1309.0113
Keywords: logistic regression; least squares regression; global error bound; inexact gradient method; non-asymptotic convergence rate
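To illustrate the setting the keywords describe, the following is a minimal sketch of an inexact gradient method applied to a least-squares problem that is convex but not strongly convex (the design matrix is rank-deficient). All names, the error schedule, and the tolerances here are illustrative assumptions, not the paper's algorithm or analysis.

```python
import numpy as np

# Hypothetical demo: gradient descent with inexact (perturbed) gradients on
# least squares min_x 0.5 * ||A x - b||^2. A duplicated column makes A^T A
# singular, so the objective is convex but NOT strongly convex.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
A[:, 4] = A[:, 3]                     # rank deficiency removes strong convexity
b = rng.standard_normal(20)

L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
x = np.zeros(5)
for k in range(2000):
    grad = A.T @ (A @ x - b)          # exact gradient
    err = rng.standard_normal(5) / (k + 1) ** 2   # summable gradient error
    x = x - (grad + err) / L          # inexact gradient step

residual = np.linalg.norm(A.T @ (A @ x - b))      # gradient norm at the iterate
print(residual)
```

With a summable error sequence like the one above, the gradient norm is driven toward zero even though no strong convexity is available; error-bound conditions of the kind studied in the paper are what make such non-asymptotic rates possible.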
Related Items (10)
Accelerating incremental gradient optimization with curvature information
On the Estimation Performance and Convergence Rate of the Generalized Power Method for Phase Synchronization
Inexact gradient projection method with relative error tolerance
A linearly convergent stochastic recursive gradient method for convex optimization
RSG: Beating Subgradient Method without Smoothness and Strong Convexity
On the Nonergodic Convergence Rate of an Inexact Augmented Lagrangian Framework for Composite Convex Programming
Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
Convergence Analysis of Inexact Randomized Iterative Methods
Hölderian Error Bounds and Kurdyka-Łojasiewicz Inequality for the Trust Region Subproblem
Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization