
A Proximal Stochastic Gradient Method with Progressive Variance Reduction

From MaRDI portal
Publication: 5245377

DOI: 10.1137/140961791
zbMath: 1321.65016
arXiv: 1403.4699
OpenAlex: W2047152541
MaRDI QID: Q5245377

Tong Zhang, Lin Xiao

Publication date: 8 April 2015

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/1403.4699
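The paper this entry indexes proposes Prox-SVRG: a proximal stochastic gradient method in which each inner step uses a variance-reduced gradient estimate built from a periodically refreshed full gradient, followed by a proximal step on the regularizer. The following is a minimal illustrative sketch (not the authors' reference implementation), applied to l1-regularized least squares; all names, step sizes, and epoch lengths are illustrative choices.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_svrg(A, b, lam, eta=0.01, outer=30, inner=None, seed=0):
    """Sketch of Prox-SVRG for min_x (1/2n)||Ax - b||^2 + lam*||x||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    if inner is None:
        inner = 2 * n  # epoch length, a common heuristic choice
    x_snap = np.zeros(d)
    for _ in range(outer):
        # Full gradient at the snapshot point (the "progressive" anchor).
        full_grad = A.T @ (A @ x_snap - b) / n
        x = x_snap.copy()
        for _ in range(inner):
            i = rng.integers(n)
            ai = A[i]
            # Variance-reduced stochastic gradient:
            # grad f_i(x) - grad f_i(x_snap) + full_grad
            v = ai * (ai @ x - b[i]) - ai * (ai @ x_snap - b[i]) + full_grad
            # Proximal step handles the nonsmooth l1 term.
            x = soft_threshold(x - eta * v, eta * lam)
        x_snap = x  # use the last inner iterate as the next snapshot
    return x_snap
```

Because the correction term vanishes as the iterates approach the snapshot, the variance of `v` shrinks over the run, which is what permits a constant step size `eta` rather than a diminishing one.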



Related Items

Nonconvex optimization with inertial proximal stochastic variance reduction gradient
A stochastic variance reduced gradient using Barzilai-Borwein techniques as second order information
A mini-batch proximal stochastic recursive gradient algorithm with diagonal Barzilai-Borwein stepsize
A stochastic variance reduction algorithm with Bregman distances for structured composite problems
Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
Block mirror stochastic gradient method for stochastic optimization
A line search based proximal stochastic gradient algorithm with dynamical variance reduction
Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification
An Asymptotic Analysis of Random Partition Based Minibatch Momentum Methods for Linear Regression Models
An inexact primal-dual smoothing framework for large-scale non-bilinear saddle point problems
Adaptive proximal SGD based on new estimating sequences for sparser ERM
Proximal stochastic recursive momentum algorithm for nonsmooth nonconvex optimization problems
Gradient complexity and non-stationary views of differentially private empirical risk minimization
Open issues and recent advances in DC programming and DCA
On inexact stochastic splitting methods for a class of nonconvex composite optimization problems with relative error
Accelerated and Instance-Optimal Policy Evaluation with Linear Function Approximation
An inexact first-order method for constrained nonlinear optimization
Asynchronous variance-reduced block schemes for composite non-convex stochastic optimization: block-specific steplengths and adapted batch-sizes
Accelerating mini-batch SARAH by step size rules
Predictive stochastic programming
On data preconditioning for regularized loss minimization
An online conjugate gradient algorithm for large-scale data analysis in machine learning
An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization
MAGMA: Multilevel Accelerated Gradient Mirror Descent Algorithm for Large-Scale Convex Composite Minimization
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Semi-Infinite Linear Regression and Its Applications
An interior stochastic gradient method for a class of non-Lipschitz optimization problems
A stochastic primal-dual method for a class of nonconvex constrained optimization
A Variable Sample-Size Stochastic Quasi-Newton Method for Smooth and Nonsmooth Stochastic Convex Optimization
Proximal average approximated incremental gradient descent for composite penalty regularized empirical risk minimization
On the Convergence of Stochastic Primal-Dual Hybrid Gradient
Statistical inference for model parameters in stochastic gradient descent
Accelerating incremental gradient optimization with curvature information
Improving kernel online learning with a snapshot memory
Cocoercivity, smoothness and bias in variance-reduced stochastic gradient methods
A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
An aggressive reduction on the complexity of optimization for non-strongly convex objectives
Convergence rates of accelerated proximal gradient algorithms under independent noise
Accelerated randomized mirror descent algorithms for composite non-strongly convex optimization
Linear convergence of cyclic SAGA
Trimmed Statistical Estimation via Variance Reduction
An accelerated variance reducing stochastic method with Douglas-Rachford splitting
Inexact proximal stochastic gradient method for convex composite optimization
Importance sampling in signal processing applications
Inexact version of Bregman proximal gradient algorithm
Accelerated stochastic variance reduction for a class of convex optimization problems
Batched Stochastic Gradient Descent with Weighted Sampling
Accelerated dual-averaging primal–dual method for composite convex minimization
Inexact proximal stochastic second-order methods for nonconvex composite optimization
Multilevel Stochastic Gradient Methods for Nested Composition Optimization
A linearly convergent stochastic recursive gradient method for convex optimization
Variance-Based Modified Backward-Forward Algorithm with Line Search for Stochastic Variational Inequality Problems and Its Applications
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate
Improved SVRG for finite sum structure optimization with application to binary classification
Minimizing finite sums with the stochastic average gradient
Stochastic gradient method with Barzilai-Borwein step for unconstrained nonlinear optimization
Stochastic variance reduced gradient methods using a trust-region-like scheme
Efficient Learning with a Family of Nonconvex Regularizers by Redistributing Nonconvexity
A Tight Bound of Hard Thresholding
Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
Incremental quasi-subgradient methods for minimizing the sum of quasi-convex functions
Random Gradient Extrapolation for Distributed and Stochastic Optimization
Stochastic proximal quasi-Newton methods for non-convex composite optimization
A unified convergence analysis of stochastic Bregman proximal gradient and extragradient methods
General convergence analysis of stochastic first-order methods for composite optimization
Randomized smoothing variance reduction method for large-scale non-smooth convex optimization
Stochastic sub-sampled Newton method with variance reduction
Stochastic quasi-gradient methods: variance reduction via Jacobian sketching
Accelerated proximal incremental algorithm schemes for non-strongly convex functions
Search Direction Correction with Normalized Gradient Makes First-Order Methods Faster
Provable accelerated gradient method for nonconvex low rank optimization
On the linear convergence of the stochastic gradient method with constant step-size
Coordinate descent with arbitrary sampling I: algorithms and complexity
An Inexact Variable Metric Proximal Point Algorithm for Generic Quasi-Newton Acceleration
Riemannian Stochastic Variance Reduced Gradient Algorithm with Retraction and Vector Transport
MultiLevel Composite Stochastic Optimization via Nested Variance Reduction
Accelerate stochastic subgradient method by leveraging local growth condition
Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization
Stochastic primal dual fixed point method for composite optimization
A randomized incremental primal-dual method for decentralized consensus optimization
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
Dualize, split, randomize: toward fast nonsmooth optimization algorithms
High-dimensional model recovery from random sketched data by exploring intrinsic sparsity
Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
Kalman-Based Stochastic Gradient Method with Stop Condition and Insensitivity to Conditioning
Asymptotic Results of Stochastic Decomposition for Two-Stage Stochastic Quadratic Programming
High-performance statistical computing in the computing environments of the 2020s
A Stochastic Variance Reduced Primal Dual Fixed Point Method for Linearly Constrained Separable Optimization
A stochastic Nesterov's smoothing accelerated method for general nonsmooth constrained stochastic composite convex optimization
On stochastic Kaczmarz type methods for solving large scale systems of ill-posed equations
Bregman Finito/MISO for Nonconvex Regularized Finite Sum Minimization without Lipschitz Gradient Continuity
A Stochastic Proximal Alternating Minimization for Nonsmooth and Nonconvex Optimization
Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
Accelerating variance-reduced stochastic gradient methods