Inexact proximal stochastic gradient method for convex composite optimization
DOI: 10.1007/s10589-017-9932-7 · zbMath: 1390.90432 · OpenAlex: W2745037491 · MaRDI QID: Q1694394
Xiao Wang, Shuxiong Wang, Hongchao Zhang
Publication date: 1 February 2018
Published in: Computational Optimization and Applications
Full work available at URL: https://doi.org/10.1007/s10589-017-9932-7
Keywords: global convergence; complexity bound; empirical risk minimization; convex composite optimization; stochastic gradient; inexact methods
MSC classifications: Convex programming (90C25); Numerical optimization and variational techniques (65K10); Applications of operator theory in optimization, convex analysis, mathematical programming, economics (47N10)
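The record concerns proximal stochastic gradient methods for problems of the form min f(x) + ψ(x), where f is a smooth data-fitting term and ψ a convex regularizer. As an illustration only (not the paper's specific inexact scheme), the following sketch applies the standard proximal stochastic gradient update to a lasso problem, where the proximal operator of the ℓ₁-norm is soft-thresholding; all function names and parameters here are our own choices:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd_lasso(A, b, lam, steps=2000, eta=0.01, seed=0):
    # Minimize (1/2n) * ||A x - b||^2 + lam * ||x||_1 by sampling one row
    # per iteration (stochastic gradient of the smooth part) and then
    # applying the exact proximal step for the nonsmooth term.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)
        g = (A[i] @ x - b[i]) * A[i]          # gradient of the sampled term
        x = soft_threshold(x - eta * g, eta * lam)
    return x
```

The "inexact" methods studied in the paper allow the proximal subproblem to be solved only approximately; the exact soft-threshold used above is the special case where that subproblem has a closed-form solution.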
Cites Work
- CUR matrix decompositions for improved data analysis
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Practical inexact proximal quasi-Newton method with global complexity analysis
- Gradient methods for minimizing composite functions
- An optimal method for stochastic composite optimization
- Minimizing finite sums with the stochastic average gradient
- Fixed point and Bregman iterative methods for matrix rank minimization
- Incremental proximal methods for large scale convex optimization
- Incremental constraint projection methods for variational inequalities
- Stochastic First-Order Methods with Random Constraint Projection
- Accelerated and Inexact Forward-Backward Algorithms
- A Singular Value Thresholding Algorithm for Matrix Completion
- MAGMA: Multilevel Accelerated Gradient Mirror Descent Algorithm for Large-Scale Convex Composite Minimization
- Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
- An Inexact Accelerated Proximal Gradient Method for Large Scale Linearly Constrained Convex SDP
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Total Variation Projection With First Order Schemes
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Model Selection and Estimation in Regression with Grouped Variables