Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
From MaRDI portal
Publication:5883312
DOI: 10.1137/22M1469584
OpenAlex: W4317035528
MaRDI QID: Q5883312
Publication date: 30 March 2023
Published in: SIAM Journal on Optimization
Full work available at URL: https://doi.org/10.1137/22m1469584
Mathematics Subject Classification:
- Analysis of algorithms and problem complexity (68Q25)
- Numerical mathematical programming methods (65K05)
- Abstract computational complexity for mathematical programming problems (90C60)
- Nonlinear programming (90C30)
Cites Work
- Smooth minimization of non-smooth functions
- Sparse inverse covariance estimation with the graphical lasso
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Gradient sliding for composite optimization
- Gradient methods for minimizing composite functions
- Accelerated Bregman method for linearly constrained \(\ell _1-\ell _2\) minimization
- Smoothing proximal gradient method for general structured sparse regression
- Adaptive inexact fast augmented Lagrangian methods for constrained convex optimization
- Inexact accelerated augmented Lagrangian methods
- Accelerated schemes for a class of variational inequalities
- Coordinate-friendly structures, algorithms and applications
- A first-order primal-dual algorithm for convex problems with applications to imaging
- Accelerated linearized Bregman method
- Iteration-complexity of first-order penalty methods for convex programming
- Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
- Accelerated gradient sliding for structured convex optimization
- Accelerated inexact composite gradient methods for nonconvex spectral optimization problems
- Inertial accelerated primal-dual methods for linear equality constrained convex optimization problems
- Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems
- Iteration complexity of inexact augmented Lagrangian methods for constrained convex programming
- Communication-efficient algorithms for decentralized and stochastic optimization
- Proximal gradient method for huberized support vector machine
- An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
- Multiplier and gradient methods
- Exact matrix completion via convex optimization
- Conditional Gradient Sliding for Convex Optimization
- Accelerated and Inexact Forward-Backward Algorithms
- A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion
- Smoothing and First Order Methods: A Unified Framework
- Accelerated Uzawa methods for convex optimization
- Robust principal component analysis?
- Analysis and Generalizations of the Linearized Bregman Method
- On Degrees of Freedom of Projection Estimators With Applications to Multivariate Nonparametric Regression
- Algorithms for Fitting the Constrained Lasso
- An Accelerated HPE-Type Algorithm for a Class of Composite Convex-Concave Saddle-Point Problems
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
- Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems
- Sparsity and Smoothness Via the Fused Lasso
- An Inexact Accelerated Proximal Gradient Method for Large Scale Linearly Constrained Convex SDP
- Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming
- A Primal-Dual Algorithm with Line Search for General Convex-Concave Saddle Point Problems
- Faster Lagrangian-Based Methods in Convex Optimization
- Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
- First-Order Methods for Problems with $O$(1) Functional Constraints Can Have Almost the Same Convergence Rate as for Unconstrained Problems
- Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters
- Complexity of a Quadratic Penalty Accelerated Inexact Proximal Point Method for Solving Linearly Constrained Nonconvex Composite Programs
- Optimal Primal-Dual Methods for a Class of Saddle Point Problems
- Excessive Gap Technique in Nonsmooth Convex Minimization
- Model Selection and Estimation in Regression with Grouped Variables
- Bregman Iterative Algorithms for $\ell_1$-Minimization with Applications to Compressed Sensing
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming