Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
Publication: 2033403
DOI: 10.1007/s43069-021-00059-y
zbMath: 1470.90078
OpenAlex: W3181406714
MaRDI QID: Q2033403
Publication date: 17 June 2021
Published in: SN Operations Research Forum
Full work available at URL: https://doi.org/10.1007/s43069-021-00059-y
Keywords: linear convergence; variance reduction; stochastic gradient descent; non-smooth optimization; randomized smoothing
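The keywords combine randomized smoothing with variance-reduced stochastic gradients for non-smooth convex objectives. Since this portal entry carries no abstract, the following is only a minimal illustrative sketch of the generic ingredients, not the paper's algorithm: it uses the standard two-point Gaussian-smoothing gradient estimator g = (f(x + mu*u) - f(x)) / mu * u and averages several sampled directions per step to reduce the estimator's variance. All function names and parameter values below are assumptions chosen for the illustration.

```python
import numpy as np

def smoothed_grad(f, x, mu, num_samples, rng):
    """Monte Carlo estimate of the gradient of the Gaussian-smoothed
    surrogate f_mu(x) = E[f(x + mu*u)], u ~ N(0, I), using the standard
    two-point estimator g = (f(x + mu*u) - f(x)) / mu * u."""
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - fx) / mu * u
    # Averaging independent directions shrinks the estimator's variance.
    return g / num_samples

def rs_sgd(f, x0, mu=1e-2, step=1e-2, num_samples=8, iters=500, seed=0):
    """Randomized-smoothing SGD on a non-smooth convex f (illustrative only;
    mu, step, and sample counts are hypothetical defaults)."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for _ in range(iters):
        x -= step * smoothed_grad(f, x, mu, num_samples, rng)
    return x

if __name__ == "__main__":
    # Non-smooth convex test problem: f(x) = ||x - 1||_1, minimized at (1, ..., 1).
    f = lambda x: np.abs(x - 1.0).sum()
    print(rs_sgd(f, x0=np.zeros(5)))  # iterates approach the minimizer
```

With the hypothetical defaults above, the iterates settle near the minimizer up to noise of order the step size; the paper's method additionally targets linear convergence via a dedicated variance-reduction scheme, which this sketch does not reproduce.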
Cites Work
- Nonlinear total variation based noise removal algorithms
- Smooth minimization of non-smooth functions
- An approximate quasi-Newton bundle-type method for nonsmooth optimization
- Minimizing finite sums with the stochastic average gradient
- An algorithm for total variation minimization and applications
- Approximation analysis of gradient descent algorithm for bipartite ranking
- Linear convergence of epsilon-subgradient descent methods for a class of convex functions
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Fast proximal algorithms for nonsmooth convex optimization
- Randomized Smoothing for Stochastic Optimization
- Quasi-Newton Bundle-Type Methods for Nondifferentiable Convex Optimization
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Sparsity and Smoothness Via the Fused Lasso
- Survey of Bundle Methods for Nonsmooth Optimization
- Stochastic Approximation for Risk-Aware Markov Decision Processes
- Data-Driven Nonsmooth Optimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Online Learning with Kernels
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization