A Smooth Inexact Penalty Reformulation of Convex Problems with Linear Constraints
From MaRDI portal
Publication: 5152474
DOI: 10.1137/18M1209180
zbMath: 1477.90067
arXiv: 1808.07749
OpenAlex: W3193317903
MaRDI QID: Q5152474
Tatiana Tatarenko, Angelia Nedić
Publication date: 24 September 2021
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1808.07749
Numerical mathematical programming methods (65K05) Convex programming (90C25) Large-scale problems in mathematical programming (90C06)
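As a rough illustration of the penalty idea named in the title — and only that idea in generic form, not the paper's specific smooth inexact penalty — a linearly constrained convex problem min f(x) s.t. Ax = b can be replaced by the unconstrained surrogate f(x) + (γ/2)‖Ax − b‖², whose minimizer approaches feasibility as γ grows. The sketch below, with arbitrary random data, shows this for a simple quadratic f:

```python
import numpy as np

# Generic quadratic-penalty sketch (an assumption for illustration; the
# paper's own penalty reformulation differs): minimize
#   f(x) = 0.5 * ||x - c||^2   subject to   A x = b
# via the unconstrained surrogate
#   F_gamma(x) = f(x) + (gamma / 2) * ||A x - b||^2.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # 3 linear constraints in R^5
b = rng.standard_normal(3)
c = rng.standard_normal(5)

def penalized_solution(gamma):
    # F_gamma is a strongly convex quadratic, so its minimizer solves
    #   (I + gamma * A^T A) x = c + gamma * A^T b.
    H = np.eye(5) + gamma * A.T @ A
    return np.linalg.solve(H, c + gamma * A.T @ b)

for gamma in (1.0, 100.0, 10000.0):
    x = penalized_solution(gamma)
    # Constraint violation ||A x - b|| shrinks as gamma grows.
    print(f"gamma = {gamma:>8}: violation = {np.linalg.norm(A @ x - b):.2e}")
```

The well-known drawback of this plain quadratic penalty is that feasibility is only reached in the limit γ → ∞, which ill-conditions the surrogate; smoothed exact or inexact penalties of the kind studied in the paper aim to avoid that trade-off.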
Cites Work
- Minimizing finite sums with the stochastic average gradient
- Random algorithms for convex minimization problems
- Incremental proximal methods for large scale convex optimization
- A randomized Kaczmarz algorithm with exponential convergence
- Decomposition into functions in the minimization problem
- Incremental gradient algorithms with stepsizes bounded away from zero
- Incremental Subgradient Methods for Nondifferentiable Optimization
- Stochastic First-Order Methods with Random Constraint Projection
- Randomized Methods for Linear Constraints: Convergence Rates and Conditioning
- A Randomized Incremental Subgradient Method for Distributed Optimization in Networked Systems
- Incremental Subgradients for Constrained Convex Optimization: A Unified Framework and New Methods
- A New Class of Incremental Gradient Methods for Least Squares Problems
- The Linear l1 Estimator and the Huber M-Estimator
- An Incremental Gradient(-Projection) Method with Momentum Term and Adaptive Stepsize Rule
- Stochastic algorithms for exact and approximate feasibility of robust LMIs
- Convergence of Approximate and Incremental Subgradient Methods for Convex Optimization
- Approximations to Solutions to Systems of Linear Inequalities
- Incremental Least Squares Methods and the Extended Kalman Filter
- Katyusha: the first direct acceleration of stochastic gradient methods
- An Incremental Method for Solving Convex Finite Min-Max Problems
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization