Stochastic first-order methods for convex and nonconvex functional constrained optimization
Publication: 2689819
DOI: 10.1007/s10107-021-01742-y
OpenAlex: W2976677498
MaRDI QID: Q2689819
Digvijay Boob, Qi Deng, Guanghui Lan
Publication date: 14 March 2023
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1908.02734
Keywords: acceleration; stochastic algorithms; functional constrained optimization; convex and nonconvex optimization
MSC classifications: Semidefinite programming (90C22); Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Numerical methods based on nonlinear programming (49M37)
Related Items (5)
- First-Order Methods for Problems with $O$(1) Functional Constraints Can Have Almost the Same Convergence Rate as for Unconstrained Problems
- An accelerated inexact dampened augmented Lagrangian method for linearly-constrained nonconvex composite optimization problems
- Stochastic inexact augmented Lagrangian method for nonconvex expectation constrained optimization
- Unnamed Item
- Provably training overparameterized neural network classifiers with non-convex constraints
Cites Work
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Lectures on convex optimization
- A practical optimality condition without constraint qualifications for nonlinear programming
- Level-set methods for convex optimization
- Non-Euclidean restricted memory level method for large-scale convex optimization
- An optimal randomized incremental gradient method
- New variants of bundle methods
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Iteration-complexity of first-order penalty methods for convex programming
- Communication-efficient algorithms for decentralized and stochastic optimization
- First-order and stochastic optimization methods for machine learning
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- The Fritz John necessary optimality conditions in the presence of equality and inequality constraints
- Penalty methods with stochastic approximation for stochastic nonlinear programming
- New Proximal Point Algorithms for Convex Minimization
- A Level-Set Method for Convex Optimization with a Feasible Solution Path
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Strict Constraint Qualifications and Sequential Optimality Conditions for Constrained Optimization
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- On sequential optimality conditions for smooth constrained optimization
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming