A stochastic primal-dual method for a class of nonconvex constrained optimization
DOI: 10.1007/s10589-022-00384-w
zbMath: 1496.90067
OpenAlex: W4283397226
MaRDI QID: Q2162528
Publication date: 8 August 2022
Published in: Computational Optimization and Applications
Full work available at URL: https://doi.org/10.1007/s10589-022-00384-w
Keywords: complexity; nonconvex optimization; augmented Lagrangian function; stochastic gradient; \(\epsilon\)-stationary point
Classification (MSC): Numerical mathematical programming methods (65K05); Large-scale problems in mathematical programming (90C06); Nonconvex programming, global optimization (90C26); Stochastic programming (90C15); Developmental biology, pattern formation (92C15)
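The keywords point to a stochastic primal-dual scheme of augmented-Lagrangian type for nonconvex constrained problems whose objective is accessed through stochastic gradients, analyzed via the complexity of reaching an \(\epsilon\)-stationary point. As a rough illustration only (not the algorithm of the cited paper), the sketch below alternates a stochastic gradient step on an augmented Lagrangian in the primal variable with a dual ascent step on the multipliers; the toy problem, penalty parameter, and step sizes are assumptions made for this example.

```python
# A minimal sketch (not the paper's algorithm) of a stochastic primal-dual /
# augmented-Lagrangian update for  min_x E_xi[ f(x; xi) ]  s.t.  c(x) = 0.
# Toy problem, penalty parameter, and step sizes below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: f(x; xi) = 0.5 * ||x - xi||^2 with xi ~ N(mu, I),
# constraint c(x) = A x - b (affine, so its Jacobian is just A).
n, m = 5, 2
mu = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def stoch_grad(x):
    """Unbiased stochastic gradient of E[f(x; xi)] from one sample xi."""
    xi = mu + rng.standard_normal(n)
    return x - xi

x = np.zeros(n)        # primal iterate
lam = np.zeros(m)      # dual multiplier estimate
rho, eta, sigma = 5.0, 5e-3, 5e-3   # penalty and step sizes (assumed)

for k in range(50000):
    c = A @ x - b
    # Stochastic gradient of the augmented Lagrangian in x:
    #   grad f(x; xi) + A^T (lam + rho * c(x))
    gx = stoch_grad(x) + A.T @ (lam + rho * c)
    x = x - eta * gx                  # primal (stochastic gradient) step
    lam = lam + sigma * (A @ x - b)   # dual ascent step on the multipliers

print("constraint violation ||Ax - b|| =", np.linalg.norm(A @ x - b))
print("stationarity residual          =", np.linalg.norm((x - mu) + A.T @ lam))
```

Roughly speaking, an \(\epsilon\)-stationary point in this setting is an iterate whose stationarity residual and constraint violation are both at most \(\epsilon\), often in expectation, and the complexity keyword refers to bounds on the number of stochastic gradient evaluations needed to reach such a point.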
Uses Software
- SAGA
Cites Work
- Scenario approximation of robust and chance-constrained programs
- A sampling-and-discarding approach to chance-constrained optimization: feasibility and optimality
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- An efficient algorithm for nonconvex-linear minimax optimization problem and its application in solving weighted maximin dispersion problem
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- Algorithms for stochastic optimization with function or expectation constraints
- First-order and stochastic optimization methods for machine learning
- An augmented Lagrangian trust region method for equality constrained optimization
- Lagrange Multipliers and Optimality
- On the Ball-Constrained Weighted Maximin Dispersion Problem
- An augmented Lagrangian affine scaling method for nonlinear programming
- Variational Analysis
- A Level-Set Method for Convex Optimization with a Feasible Solution Path
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- A Primal-Dual Algorithm with Line Search for General Convex-Concave Saddle Point Problems
- Hybrid Block Successive Approximation for One-Sided Non-Convex Min-Max Problems: Algorithms and Applications
- Primal-Dual Stochastic Gradient Method for Convex Programs with Many Functional Constraints
- Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints
- Stochastic proximal quasi-Newton methods for non-convex composite optimization
- A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Convex Relaxations of the Weighted Maxmin Dispersion Problem
- A Stochastic Approximation Method