A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
DOI: 10.1137/16M1093094 · zbMath: 1386.90109 · arXiv: 1507.06243 · OpenAlex: W2962713896 · MaRDI QID: Q4600841
Volkan Cevher, Olivier Fercoq, Quoc Tran Dinh
Publication date: 17 January 2018
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1507.06243
Keywords: smoothing technique; homotopy; augmented Lagrangian; parallel and distributed computation; separable convex minimization; gap reduction technique; first-order primal-dual methods
MSC: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Computational methods for problems pertaining to operations research and mathematical programming (90-08)
Related Items (22)
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Smooth minimization of non-smooth functions
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators
- Dual extrapolation and its applications to solving variational inequalities and related problems
- On the sublinear convergence rate of multi-block ADMM
- A proximal-based decomposition method for convex minimization problems
- Introductory lectures on convex optimization. A basic course.
- A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator
- A fast dual proximal gradient algorithm for convex minimization and applications
- Parallel multi-block ADMM with \(o(1/k)\) convergence
- The convex geometry of linear inverse problems
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Iteration-complexity of first-order penalty methods for convex programming
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- A variable smoothing algorithm for solving convex optimization problems
- On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers
- Adaptive restart for accelerated gradient schemes
- On the global and linear convergence of the generalized alternating direction method of multipliers
- Smoothing alternating direction methods for fully nonsmooth constrained convex optimization
- Lectures on Modern Convex Optimization
- Iteration complexity analysis of dual first-order methods for conic convex programming
- A Variable Metric Extension of the Forward–Backward–Forward Algorithm for Monotone Operators
- Proximal Splitting Methods in Signal Processing
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Smoothing and First Order Methods: A Unified Framework
- Convergence Analysis of Primal-Dual Algorithms for a Saddle-Point Problem: From Contraction Perspective
- Convergence Rate Analysis of the Forward-Douglas-Rachford Splitting Scheme
- Convergence Rate Analysis of Primal-Dual Splitting Schemes
- On the Complexity of the Hybrid Proximal Extragradient Method for the Iterates and the Ergodic Mean
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Complexity of Variants of Tseng's Modified F-B Splitting and Korpelevich's Methods for Hemivariational Inequalities with Applications to Saddle-point and Convex Optimization Problems
- An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
- Solving Multiple-Block Separable Convex Minimization Problems Using Two-Block Alternating Direction Method of Multipliers
- Applications of a Splitting Algorithm to Decomposition in Convex Programming and Variational Inequalities
- Variational Analysis
- A First-Order Primal-Dual Algorithm with Linesearch
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Application of a Smoothing Technique to Decomposition in Convex Optimization
- Rate of Convergence Analysis of Decomposition Methods Based on the Proximal Method of Multipliers for Convex Minimization
- Computational Complexity of Inexact Gradient Augmented Lagrangian Methods: Application to Constrained MPC
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- An Accelerated Linearized Alternating Direction Method of Multipliers
- Iteration-Complexity of Block-Decomposition Algorithms and the Alternating Direction Method of Multipliers
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Faster Convergence Rates of Relaxed Peaceman-Rachford and ADMM Under Regularity Assumptions
- Composite Self-Concordant Minimization
- On the Global Linear Convergence of the ADMM with Multiblock Variables
- Convex Analysis
- Convex analysis and monotone operator theory in Hilbert spaces
- The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming