The exact worst-case convergence rate of the alternating direction method of multipliers
From MaRDI portal
Publication:6634526
DOI: 10.1007/s10107-023-02037-0 · MaRDI QID: Q6634526
Moslem Zamani, Hadi Abbaszadehpeivasti, E. de Klerk
Publication date: 7 November 2024
Published in: Mathematical Programming. Series A. Series B
Keywords: convergence rate · alternating direction method of multipliers (ADMM) · performance estimation · PŁ inequality
MSC: Semidefinite programming (90C22) · Convex programming (90C25) · Numerical methods for variational inequalities and related problems (65K15)
Cites Work
- OSQP: An Operator Splitting Solver for Quadratic Programs
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Nonlinear total variation based noise removal algorithms
- Optimized first-order methods for smooth convex minimization
- Fast alternating linearization methods for minimizing the sum of two convex functions
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- On the linear convergence of the alternating direction method of multipliers
- A dual algorithm for the solution of nonlinear variational problems via finite element approximation
- From error bounds to the complexity of first-order descent methods for convex functions
- A survey on some recent developments of alternating direction method of multipliers
- Performance of first-order methods for smooth convex minimization: a novel approach
- Accelerated alternating direction method of multipliers: an optimal \(O(1 / K)\) nonergodic analysis
- On the global and linear convergence of the generalized alternating direction method of multipliers
- Linear convergence of first order methods for non-strongly convex optimization
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Linear Convergence and Metric Selection for Douglas-Rachford Splitting and ADMM
- Partial Error Bound Conditions and the Linear Convergence Rate of the Alternating Direction Method of Multipliers
- First-Order Methods in Optimization
- Relatively Smooth Convex Optimization by First-Order Methods, and Applications
- Faster Lagrangian-Based Methods in Convex Optimization
- Alternating Direction Method of Multipliers for Machine Learning
- Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection
- Fast Alternating Direction Optimization Methods
- Linear Rate Convergence of the Alternating Direction Method of Multipliers for Convex Composite Programming
- Iteration-Complexity of Block-Decomposition Algorithms and the Alternating Direction Method of Multipliers
- Faster Convergence Rates of Relaxed Peaceman-Rachford and ADMM Under Regularity Assumptions
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Convex Analysis
- A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications
- Numerical optimization. Theoretical and practical aspects. Transl. from the French
- Conditions for linear convergence of the gradient method for non-convex optimization
Related Items (1)
This page was built for publication: The exact worst-case convergence rate of the alternating direction method of multipliers