A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
From MaRDI portal
Publication: 5210739
DOI: 10.1080/10556788.2019.1678033
zbMath: 1439.90056
OpenAlex: W2982216356
MaRDI QID: Q5210739
No author found.
Publication date: 21 January 2020
Published in: Optimization Methods and Software
Full work available at URL: https://ora.ox.ac.uk/objects/uuid:09b543f4-83e2-4e03-9846-84cab74dad6c
MSC classifications:
- Abstract computational complexity for mathematical programming problems (90C60)
- Nonconvex programming, global optimization (90C26)
Related Items
- Smoothness parameter of power of Euclidean norm
- A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
- Efficiency of higher-order algorithms for minimizing composite functions
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
- Recent Theoretical Advances in Non-Convex Optimization
- Adaptive regularization with cubics on manifolds
- Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
- An adaptive high order method for finding third-order critical points of nonconvex optimization
Cites Work
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon ^{-3/2})\) for nonconvex optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Complexity bounds for second-order optimality in unconstrained optimization
- Introductory lectures on convex optimization. A basic course.
- Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
- Cubic regularization of Newton method and its global performance
- Trust Region Methods
- Finding approximate local minima faster than gradient descent
- Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
- Gradient Descent Finds the Cubic-Regularized Nonconvex Newton Step
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization