Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
Publication: 1675558
DOI: 10.1007/s10898-016-0475-8 · zbMath: 1379.90029 · OpenAlex: W2530585400 · MaRDI QID: Q1675558
José Mario Martínez, Marcos Raydan
Publication date: 2 November 2017
Published in: Journal of Global Optimization
Full work available at URL: https://doi.org/10.1007/s10898-016-0475-8
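For orientation, a standard formulation of the cubic-regularization framework named in the title and in several of the cited works (e.g. the adaptive cubic regularisation papers) is sketched below; this is the generic model, not necessarily the exact variable-norm variant developed in this paper. At iteration \(k\), a trial step \(s\) is obtained by approximately minimizing
\[
m_k(s) = f(x_k) + \nabla f(x_k)^\top s + \tfrac{1}{2}\, s^\top B_k s + \tfrac{\sigma_k}{3}\, \|s\|^3,
\]
where \(B_k\) approximates the Hessian and the regularization parameter \(\sigma_k\) is updated adaptively, playing a role analogous to the trust-region radius in trust-region methods; in a variable-norm setting the norm \(\|\cdot\|\) may itself change from iteration to iteration.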
Related Items
- On high-order model regularization for multiobjective optimization
- A cubic regularization of Newton's method with finite difference Hessian approximations
- A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
- On the use of third-order models with fourth-order regularization for unconstrained optimization
- A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
- A filter sequential adaptive cubic regularization algorithm for nonlinear constrained optimization
- A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
- A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
- Run-and-inspect method for nonconvex optimization and global optimality bounds for R-local minimizers
- On the worst-case evaluation complexity of non-monotone line search algorithms
- Adaptive Third-Order Methods for Composite Convex Optimization
- On High-order Model Regularization for Constrained Optimization
- Gradient regularization of Newton method with Bregman distances
- A sequential adaptive regularisation using cubics algorithm for solving nonlinear equality constrained optimization
- A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
- Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
- Cubic regularization in symmetric rank-1 quasi-Newton methods
- On Regularization and Active-set Methods with Complexity for Constrained Optimization
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
- Accelerated Regularized Newton Methods for Minimizing Composite Convex Functions
- Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
- Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
- An efficient nonmonotone adaptive cubic regularization method with line search for unconstrained optimization problem
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- A generalized worst-case complexity analysis for non-monotone line searches
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
- On large-scale unconstrained optimization and arbitrary regularization
- Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
- Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
- On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
- Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization
- The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization
Cites Work
- A trust region algorithm with adaptive cubic regularization methods for nonsmooth convex minimization
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Updating the regularization parameter in the adaptive cubic regularization algorithm
- Separable cubic modeling and a trust-region strategy for unconstrained minimization with impact in global optimization
- Accelerating the cubic regularization of Newton's method on convex problems
- Nonlinear programming algorithms using trust regions and augmented Lagrangians with nonmonotone penalty parameters
- Algebraic rules for quadratic regularization of Newton's method
- Cubic regularization of Newton method and its global performance
- On the use of iterative methods in cubic regularization for unconstrained optimization
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- A Global Convergence Theory for General Trust-Region-Based Algorithms for Equality Constrained Optimization
- A Robust Trust-Region Algorithm with a Nonmonotonic Penalty Parameter Scheme for Constrained Optimization
- Practical Augmented Lagrangian Methods for Constrained Optimization
- Inexact-restoration method with Lagrangian tangent decrease and new merit function for nonlinear programming