Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
DOI: 10.1016/j.jco.2018.11.001
zbMATH: 1415.90118
arXiv: 1705.07285
OpenAlex: W2963733310
Wikidata: Q128957479
Scholia: Q128957479
MaRDI QID: Q2001208
Publication date: 2 July 2019
Published in: Journal of Complexity
Full work available at URL: https://arxiv.org/abs/1705.07285
Related Items (6)
- A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
- A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints
- Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
- Ghost Penalties in Nonconvex Constrained Optimization: Diminishing Stepsizes and Iteration Complexity
Cites Work
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Practical inexact proximal quasi-Newton method with global complexity analysis
- A trust region algorithm with adaptive cubic regularization methods for nonsmooth convex minimization
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Metric regularity, tangent sets, and second-order optimality conditions
- On a global complexity bound of the Levenberg-Marquardt method
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Convergence properties of the regularized Newton method for the unconstrained nonconvex optimization
- Worst case complexity of direct search
- The \(p\)th-order optimality conditions for inequality constrained optimization problems
- An envelope-like effect of infinitely many inequality constraints on second-order necessary conditions for minimization problems
- Global convergence rate analysis of unconstrained optimization methods based on probabilistic models
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- On the use of the energy norm in trust-region and adaptive cubic regularization subproblems
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- On the complexity of finding first-order critical points in constrained nonlinear optimization
- Cubic regularization of Newton method and its global performance
- Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization
- Evaluation Complexity for Nonlinear Constrained Optimization Using Unscaled KKT Conditions and High-Order Models
- Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the Nonsmooth Case
- On the Evaluation Complexity of Cubic Regularization Methods for Potentially Rank-Deficient Nonlinear Least-Squares Problems and Its Relevance to Constrained Nonlinear Optimization
- Worst-Case Complexity of Smoothing Quadratic Regularization Methods for Non-Lipschitzian Optimization
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- On the Evaluation Complexity of Composite Function Minimization with Applications to Nonconvex Nonlinear Programming
- Linearly Constrained Non-Lipschitz Optimization for Image Restoration
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Optimality Conditions for Degenerate Extremum Problems with Equality Constraints
- Trust Region Methods
- On High-order Model Regularization for Constrained Optimization
- An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity
- Second Order Optimality Conditions Based on Parabolic Second Order Tangent Sets
- Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
- On the Evaluation Complexity of Constrained Nonlinear Least-Squares and General Constrained Nonlinear Optimization Using Second-Order Methods
- On Nesterov's smooth Chebyshev–Rosenbrock function
- Direct Search Based on Probabilistic Descent
- Convex Analysis
- Point-to-Set Maps in Mathematical Programming