Inertial accelerated augmented Lagrangian algorithms with scaling coefficients to solve exactly and inexactly linearly constrained convex optimization problems
DOI: 10.1016/j.cam.2024.116425 · MaRDI QID: Q6664934
Authors: Author name not available, Rong Hu, Ya Ping Fang
Publication date: 16 January 2025
Published in: Journal of Computational and Applied Mathematics
Keywords: convergence analysis; augmented Lagrangian method; inexactness; scaling coefficient; linearly constrained convex programming problem
MSC: Analysis of algorithms (68W40); Convex programming (90C25); Numerical methods involving duality (49M29); Methods of reduced gradient type (90C52)
Cites Work
- Title not available
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated Bregman method for linearly constrained \(\ell _1-\ell _2\) minimization
- Variable metric quasi-Fejér monotonicity
- Customized proximal point algorithms for linearly constrained convex minimization and saddle-point problems: a unified approach
- Inexact accelerated augmented Lagrangian methods
- An accelerated inexact proximal point algorithm for convex minimization
- Accelerated linearized Bregman method
- Fast primal-dual algorithm via dynamical system for a linearly constrained convex optimization problem
- An inexact accelerated stochastic ADMM for separable convex optimization
- Inertial accelerated primal-dual methods for linear equality constrained convex optimization problems
- Improved convergence rates and trajectory convergence for primal-dual dynamical systems with vanishing damping
- Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity
- Multiplier and gradient methods
- The Bayesian elastic net
- Accelerated and inexact forward-backward algorithms
- Accelerated Uzawa methods for convex optimization
- A variational perspective on accelerated methods in optimization
- Accelerated Optimization for Machine Learning
- Accelerated First-Order Primal-Dual Proximal Methods for Linearly Constrained Composite Convex Programming
- “Second-Order Primal” + “First-Order Dual” Dynamical Systems With Time Scaling for Linear Equality Constrained Convex Optimization Problems
- Faster Lagrangian-Based Methods in Convex Optimization
- Alternating Direction Method of Multipliers for Machine Learning
- On the Nonergodic Convergence Rate of an Inexact Augmented Lagrangian Framework for Composite Convex Programming
- Bregman Iterative Algorithms for $\ell_1$-Minimization with Applications to Compressed Sensing
- A primal-dual flow for affine constrained convex optimization
- Reducing the Complexity of Two Classes of Optimization Problems by Inexact Accelerated Proximal Gradient Method
- Iteration-complexity of first-order augmented Lagrangian methods for convex programming
- Fast augmented Lagrangian method in the convex regime with convergence guarantees for the iterates
- A universal accelerated primal-dual method for convex optimization problems
- A New Insight on Augmented Lagrangian Method with Applications in Machine Learning
- Accelerated primal-dual methods with adaptive parameters for composite convex optimization with linear constraints
- DISA: a dual inexact splitting algorithm for distributed convex composite optimization