A Newton-CG based barrier-augmented Lagrangian method for general nonconvex conic optimization
DOI: 10.1007/s10589-024-00603-6 · MaRDI QID: Q6642793
Heng Huang, Chuan He, Zhaosong Lu
Publication date: 25 November 2024
Published in: Computational Optimization and Applications
Keywords: barrier method; augmented Lagrangian method; iteration complexity; operation complexity; second-order stationary point; Newton-conjugate gradient method; nonconvex conic optimization
MSC classifications: Analysis of algorithms and problem complexity (68Q25); Abstract computational complexity for mathematical programming problems (90C60); Nonconvex programming, global optimization (90C26); Nonlinear programming (90C30); Numerical methods based on necessary conditions (49M05); Newton-type methods (49M15); Mathematical programming (90Cxx)
Cites Work
- SDPNAL+: a majorized semismooth Newton-CG augmented Lagrangian method for semidefinite programming with nonnegative constraints
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points
- On the global convergence of a modified augmented Lagrangian linesearch interior-point Newton method for nonlinear programming
- An augmented Lagrangian approach for sparse principal component analysis
- Reduced gradient method combined with augmented Lagrangian and barrier for the optimal power flow problem
- An interior-point algorithm for nonconvex nonlinear programming
- A modified barrier-augmented Lagrangian method for constrained minimization
- An augmented Lagrangian interior-point method using directions of negative curvature
- A primal-dual augmented Lagrangian penalty-interior-point filter line search algorithm
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- An example comparing the standard and safeguarded augmented Lagrangian methods
- Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- A mixed logarithmic barrier-augmented Lagrangian method for nonlinear optimization
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming
- An interior algorithm for nonlinear optimization that combines line search and trust region steps
- Cubic regularization of Newton method and its global performance
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
- On the Evaluation Complexity of Cubic Regularization Methods for Potentially Rank-Deficient Nonlinear Least-Squares Problems and Its Relevance to Constrained Nonlinear Optimization
- An Augmented Lagrangian Method for Non-Lipschitz Nonconvex Programming
- A Newton-CG Augmented Lagrangian Method for Semidefinite Programming
- On Augmented Lagrangian Methods with General Lower-Level Constraints
- A Trust Region Algorithm for Nonlinearly Constrained Optimization
- Estimating the Largest Eigenvalue by the Power and Lanczos Algorithms with a Random Start
- Sparse Reconstruction by Separable Approximation
- Accelerated Methods for NonConvex Optimization
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization
- A second-order sequential optimality condition associated to the convergence of optimization algorithms
- Sequential Quadratic Programming with Penalization of the Displacement
- A log-barrier Newton-CG method for bound constrained optimization with complexity guarantees
- Finding approximate local minima faster than gradient descent
- On the complexity of an augmented Lagrangian method for nonconvex optimization
- Rapid infeasibility detection in a mixed logarithmic barrier-augmented Lagrangian method for nonlinear optimization
- On the Complexity of an Inexact Restoration Method for Constrained Optimization
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview
- On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- Practical Augmented Lagrangian Methods for Constrained Optimization
- A globally convergent Lagrangian barrier algorithm for optimization with general inequality constraints and simple bounds
- Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization
- A new trust-region algorithm for equality constrained optimization
- A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
- A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees