On Regularization and Active-set Methods with Complexity for Constrained Optimization
From MaRDI portal
Publication:4641664
DOI: 10.1137/17M1127107
zbMATH: 1390.90512
OpenAlex: W2801742348
Wikidata: Q111288279 (Scholia: Q111288279)
MaRDI QID: Q4641664
Ernesto G. Birgin, José Mario Martínez
Publication date: 18 May 2018
Published in: SIAM Journal on Optimization
Full work available at URL: https://doi.org/10.1137/17m1127107
- Analysis of algorithms and problem complexity (68Q25)
- Numerical mathematical programming methods (65K05)
- Abstract computational complexity for mathematical programming problems (90C60)
- Nonlinear programming (90C30)
- Numerical methods based on nonlinear programming (49M37)
Related Items
- Block coordinate descent for smooth nonconvex constrained minimization
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
- Worst-case evaluation complexity of a quadratic penalty method for nonconvex optimization
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
- A Trust Region Method for Finding Second-Order Stationarity in Linearly Constrained Nonconvex Optimization
- On constrained optimization with nonconvex regularization
- On the Complexity of an Inexact Restoration Method for Constrained Optimization
- On large-scale unconstrained optimization and arbitrary regularization
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- Complexity and performance of an Augmented Lagrangian algorithm
Uses Software
Cites Work
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Complexity bounds for second-order optimality in unconstrained optimization
- Partial spectral projected gradient method with active-set strategy for linearly constrained optimization
- Optimal quadratic programming algorithms. With applications to variational inequalities
- A practical optimality condition without constraint qualifications for nonlinear programming
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Large-scale active-set box-constrained optimization method with spectral projected gradients
- On the convergence of projected gradient processes to singular critical points
- Evaluating bound-constrained minimization software
- Evaluation complexity bounds for smooth constrained nonlinear optimization using scaled KKT conditions and high-order models
- CUTEst: a constrained and unconstrained testing environment with safe threads for mathematical optimization
- Cubic regularization of Newton method and its global performance
- On the use of iterative methods in cubic regularization for unconstrained optimization
- Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
- Remark on “algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization”
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Projector preconditioning for partially bound-constrained quadratic optimization
- Global Convergence of a Class of Trust Region Algorithms for Optimization with Simple Bounds
- Newton’s Method and the Goldstein Step-Length Rule for Constrained Minimization Problems
- Global and Asymptotic Convergence Rate Estimates for a Class of Projected Gradient Processes
- Large-scale linearly constrained optimization
- Box Constrained Quadratic Programming with Proportioning and Projections
- Inexact spectral projected gradient methods on convex sets
- Trust Region Methods
- Nonmonotone Spectral Projected Gradient Methods on Convex Sets
- On High-order Model Regularization for Constrained Optimization
- ARCq: a new adaptive regularization by cubics
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Spectral projected gradient and variable metric methods for optimization with linear inequalities
- On the numerical solution of bound constrained optimization problems
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
- Algorithm 813
- Practical active-set Euclidian trust-region method with spectral projected gradients for bound-constrained minimization
- Practical Augmented Lagrangian Methods for Constrained Optimization
- Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
- On sequential optimality conditions for smooth constrained optimization