An adaptive primal-dual framework for nonsmooth convex minimization
DOI: 10.1007/s12532-019-00173-3 · zbMath: 1452.90246 · arXiv: 1808.04648 · OpenAlex: W2983683136 · Wikidata: Q126855440 · Scholia: Q126855440 · MaRDI QID: Q2220901
Volkan Cevher, Olivier Fercoq, Ahmet Alacaoglu, Quoc Tran-Dinh
Publication date: 25 January 2021
Published in: Mathematical Programming Computation
Full work available at URL: https://arxiv.org/abs/1808.04648
Keywords: augmented Lagrangian; nonsmooth convex optimization; self-adaptive method; restarting; primal-dual first-order methods
MSC classes: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Computational methods for problems pertaining to operations research and mathematical programming (90-08)
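The keywords above place this work among primal-dual first-order methods for nonsmooth convex problems. As a minimal illustration of that method class (a plain Chambolle-Pock iteration, one of the cited works below, and not the adaptive framework of this publication; the problem, function name, and parameters are illustrative assumptions), the following sketch solves a small 1D total-variation denoising problem $\min_x \tfrac{1}{2}\|x - c\|^2 + \lambda\|Dx\|_1$:

```python
import numpy as np

def pd_tv_denoise(c, lam, n_iter=2000):
    """Illustrative primal-dual (Chambolle-Pock) iteration for
    min_x 0.5*||x - c||^2 + lam*||D x||_1, with D the forward-difference
    operator. A sketch only, not the adaptive scheme of the reviewed paper."""
    n = len(c)
    # forward-difference matrix: (D x)_i = x_{i+1} - x_i
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    L = np.linalg.norm(D, 2)       # operator norm ||D||
    tau = sigma = 0.9 / L          # step sizes satisfying sigma*tau*||D||^2 < 1
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(n - 1)
    for _ in range(n_iter):
        # dual step: prox of the conjugate of lam*||.||_1 is a clip to [-lam, lam]
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # primal step: prox of 0.5*||. - c||^2 averages the gradient step with c
        x_new = (x - tau * (D.T @ y) + tau * c) / (1.0 + tau)
        # extrapolation with theta = 1
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```

For a constant input the total-variation term vanishes at the optimum, so `pd_tv_denoise(3 * np.ones(5), 0.5)` converges to the constant vector of threes. The adaptive framework of the publication additionally smooths the nonsmooth term and updates the smoothness parameter with restarts, which this fixed-step sketch omits.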
Related Items (10)
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- On the ergodic convergence rates of a first-order primal-dual algorithm
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- A note on the convergence of ADMM for linearly constrained convex optimization problems
- A dual algorithm for the solution of nonlinear variational problems via finite element approximation
- First-order algorithms for convex optimization with nonseparable objective and coupled constraints
- Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
- A first-order primal-dual algorithm for convex problems with applications to imaging
- A double smoothing technique for solving unconstrained nondifferentiable convex optimization problems
- Iteration-complexity of first-order penalty methods for convex programming
- A splitting algorithm for dual monotone inclusions involving cocoercive operators
- Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming
- Accelerated alternating direction method of multipliers: an optimal $O(1/K)$ nonergodic analysis
- A variable smoothing algorithm for solving convex optimization problems
- Adaptive restart for accelerated gradient schemes
- Multiplier and gradient methods
- Atomic Decomposition by Basis Pursuit
- Proximal Splitting Methods in Signal Processing
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Smoothing and First Order Methods: A Unified Framework
- Double Smoothing Technique for Large-Scale Linearly Constrained Convex Optimization
- Convergence Rate Analysis of the Forward-Douglas-Rachford Splitting Scheme
- Sparse and stable Markowitz portfolios
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- Proximal Minimization Methods with Generalized Bregman Functions
- A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
- Complexity of first-order inexact Lagrangian and penalty methods for conic convex programming
- Nonlinear Proximal Point Algorithms Using Bregman Functions, with Applications to Convex Programming
- Application of a Smoothing Technique to Decomposition in Convex Optimization
- Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming
- Rate of Convergence Analysis of Decomposition Methods Based on the Proximal Method of Multipliers for Convex Minimization
- Computational Complexity of Inexact Gradient Augmented Lagrangian Methods: Application to Constrained MPC
- An Accelerated Linearized Alternating Direction Method of Multipliers
- Iteration-Complexity of Block-Decomposition Algorithms and the Alternating Direction Method of Multipliers
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Faster Convergence Rates of Relaxed Peaceman-Rachford and ADMM Under Regularity Assumptions
- Convex analysis and monotone operator theory in Hilbert spaces
- Compressed sensing
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming