On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
From MaRDI portal
Publication:2815548
DOI: 10.1080/10556788.2015.1130129 · zbMath: 1350.90032 · OpenAlex: W2337889385 · MaRDI QID: Q2815548
Jin Yun Yuan, Ya-Xiang Yuan, Geovani Nunes Grapiglia
Publication date: 29 June 2016
Published in: Optimization Methods and Software
Full work available at URL: https://doi.org/10.1080/10556788.2015.1130129
- Numerical mathematical programming methods (65K05)
- Multi-objective and goal programming (90C29)
- Nonlinear programming (90C30)
- Newton-type methods (49M15)
- Numerical methods based on nonlinear programming (49M37)
Related Items
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
- Complexity bound of trust-region methods for convex smooth unconstrained multiobjective optimization
- On the worst-case evaluation complexity of non-monotone line search algorithms
- Newton-type methods for non-convex optimization under inexact Hessian information
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
Cites Work
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- On a global complexity bound of the Levenberg-Marquardt method
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Worst case complexity of direct search
- Convergence rate of the trust region method for nonlinear equations under local error bound condition
- A new trust region method for nonlinear equations
- A quasi-Newton trust region method with a new conic model for the unconstrained optimization
- Cubic regularization of Newton method and its global performance
- Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
- On the Oracle Complexity of First-Order and Derivative-Free Algorithms for Smooth Nonconvex Minimization
- Convergence of a Regularized Euclidean Residual Algorithm for Nonlinear Least-Squares
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Recursive Trust-Region Methods for Multiscale Nonlinear Optimization
- Trust Region Methods
- Nonlinear stepsize control, trust regions and regularizations for unconstrained optimization
- Worst-case evaluation complexity of non-monotone gradient-related algorithms for unconstrained optimization
- Modified Gauss–Newton scheme with worst case guarantees for global performance
- Worst case complexity of direct search under convexity