Proximal alternating penalty algorithms for nonsmooth constrained convex optimization
From MaRDI portal
Publication: 1734766
DOI: 10.1007/s10589-018-0033-z
zbMath: 1418.90199
arXiv: 1711.01367
OpenAlex: W2893640118
MaRDI QID: Q1734766
Publication date: 27 March 2019
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/1711.01367
Keywords: convergence rate, constrained convex optimization, first-order methods, quadratic penalty method, accelerated scheme, proximal alternating algorithm
Related Items (7)
- A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
- Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization
- An efficient adaptive accelerated inexact proximal point method for solving linearly constrained nonconvex composite problems
- Robust multicategory support matrix machines
- An adaptive primal-dual framework for nonsmooth convex minimization
- Non-stationary First-Order Primal-Dual Algorithms with Faster Convergence Rates
- A primal-dual flow for affine constrained convex optimization
Uses Software
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- On the rate of convergence of the proximal alternating linearized minimization algorithm for convex problems
- On the ergodic convergence rates of a first-order primal-dual algorithm
- Gradient methods for minimizing composite functions
- A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
- Fast alternating linearization methods for minimizing the sum of two convex functions
- Adaptive smoothing algorithms for nonsmooth composite convex minimization
- Interior-point Lagrangian decomposition method for separable convex optimization
- On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
- A proximal-based decomposition method for convex minimization problems
- Introductory lectures on convex optimization. A basic course.
- Templates for convex cone problems with applications to sparse signal recovery
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Iteration-complexity of first-order penalty methods for convex programming
- A splitting algorithm for dual monotone inclusions involving cocoercive operators
- An adaptive primal-dual framework for nonsmooth convex minimization
- Adaptive restart for accelerated gradient schemes
- Linear convergence of first order methods for non-strongly convex optimization
- Lectures on Modern Convex Optimization
- On the $O(1/n)$ Convergence Rate of the Douglas–Rachford Alternating Direction Method
- Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
- Convergence Rate Analysis of the Forward-Douglas-Rachford Splitting Scheme
- Convergence Rate Analysis of Primal-Dual Splitting Schemes
- A General Framework for a Class of First Order Primal-Dual Algorithms for Convex Optimization in Imaging Science
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Applications of a Splitting Algorithm to Decomposition in Convex Programming and Variational Inequalities
- A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
- Complexity of first-order inexact Lagrangian and penalty methods for conic convex programming
- Proximal Decomposition Via Alternating Linearization
- Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming
- Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
- Fast Alternating Direction Optimization Methods
- An Accelerated Linearized Alternating Direction Method of Multipliers
- A Selective Linearization Method For Multiblock Convex Optimization
- Regularization and Variable Selection Via the Elastic Net
- Faster Convergence Rates of Relaxed Peaceman-Rachford and ADMM Under Regularity Assumptions
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Convex Analysis
- Convex analysis and monotone operator theory in Hilbert spaces