Dual descent augmented Lagrangian method and alternating direction method of multipliers
From MaRDI portal
Publication:6542544
DOI: 10.1137/21m1449099 · zbMATH Open: 1539.65068 · MaRDI QID: Q6542544
Publication date: 22 May 2024
Published in: SIAM Journal on Optimization
Mathematics Subject Classification:
- Numerical mathematical programming methods (65K05)
- Nonconvex programming, global optimization (90C26)
- Optimality conditions and duality in mathematical programming (90C46)
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- On the linear convergence of the alternating direction method of multipliers
- A three-operator splitting scheme and its optimization applications
- Monotone splitting sequential quadratic optimization algorithm with applications in electric power systems
- On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators
- A dual algorithm for the solution of nonlinear variational problems via finite element approximation
- Global convergence of unmodified 3-block ADMM for a class of convex minimization problems
- A globally convergent algorithm for nonconvex optimization based on block coordinate update
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- Global convergence of ADMM in nonconvex nonsmooth optimization
- Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- Moreau envelope augmented Lagrangian method for nonconvex optimization with linear constraints
- A QCQP-based splitting SQP algorithm for two-block nonconvex constrained optimization problems with application
- On non-ergodic convergence rate of Douglas–Rachford alternating direction method of multipliers
- Perturbed proximal primal-dual algorithm for nonconvex nonsmooth optimization
- Multiplier and gradient methods
- The multiplier method of Hestenes and Powell applied to convex programming
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- A two-level distributed algorithm for nonconvex constrained optimization
- A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems
- The Numerical Solution of Parabolic and Elliptic Differential Equations
- On the Numerical Solution of Heat Conduction Problems in Two and Three Space Variables
- On Augmented Lagrangian Methods with General Lower-Level Constraints
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- On the Linear Convergence of the ADMM in Decentralized Consensus Optimization
- Convergence rate bounds for a proximal ADMM with over-relaxation stepsize parameter for solving nonconvex linearly constrained problems
- Sequential Quadratic Optimization for Nonlinear Equality Constrained Stochastic Optimization
- Penalty Dual Decomposition Method for Nonsmooth Nonconvex Optimization—Part I: Algorithms and Convergence Analysis
- Penalty Dual Decomposition Method for Nonsmooth Nonconvex Optimization—Part II: Applications
- A Proximal Alternating Direction Method of Multiplier for Linearly Constrained Nonconvex Minimization
- Iteration-Complexity of Block-Decomposition Algorithms and the Alternating Direction Method of Multipliers
- Convex Analysis
- A Global Dual Error Bound and Its Application to the Analysis of Linearly Constrained Nonconvex Optimization
- Iteration Complexity of an Inner Accelerated Inexact Proximal Augmented Lagrangian Method Based on the Classical Lagrangian Function
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming
- A First-Order Primal-Dual Method for Nonconvex Constrained Optimization Based on the Augmented Lagrangian
- Iteration Complexity of a Proximal Augmented Lagrangian Method for Solving Nonconvex Composite Optimization Problems with Nonlinear Convex Constraints
Related Items (1)