Trust region-type method under inexact gradient and inexact Hessian with convergence analysis
From MaRDI portal
Publication: 6665208
DOI: 10.12286/jssx.j2022-0960
MaRDI QID: Q6665208
Publication date: 17 January 2025
Published in: Mathematica Numerica Sinica
Cites Work
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Complexity bounds for second-order optimality in unconstrained optimization
- Sub-sampled Newton methods
- Newton-type methods for non-convex optimization under inexact Hessian information
- Cubic regularization of Newton method and its global performance
- On the use of iterative methods in cubic regularization for unconstrained optimization
- On the worst-case complexity of nonlinear stepsize control algorithms for convex unconstrained optimization
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- Iterative Methods for Finding a Trust-region Step
- Newton’s Method with a Model Trust Region Modification
- Solving the Trust-Region Subproblem using the Lanczos Method
- A Stochastic Approximation Method