Newton-MR: inexact Newton method with minimum residual sub-problem solver
Publication: 6114941
DOI: 10.1016/j.ejco.2022.100035
arXiv: 1810.00303
OpenAlex: W3206532668
MaRDI QID: Q6114941
No author found.
Publication date: 12 July 2023
Published in: EURO Journal on Computational Optimization
Full work available at URL: https://arxiv.org/abs/1810.00303
MSC classification
- Numerical mathematical programming methods (65K05)
- Nonconvex programming, global optimization (90C26)
- Nonlinear programming (90C30)
- Methods of quasi-Newton type (90C53)
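The title describes an inexact Newton method in which the Newton system is solved only approximately by a minimum-residual (MINRES-type) Krylov solver. As a rough illustration of that idea only, and not the authors' Newton-MR algorithm, the sketch below pairs an approximate MINRES solve of H(x) p = -g(x) with a simplified backtracking line search on the squared gradient norm; the function names, tolerances, and the line-search rule are assumptions made for this example.

```python
import numpy as np
from scipy.sparse.linalg import minres

def inexact_newton_minres(grad, hess, x0, max_iter=100, g_tol=1e-8,
                          rho=0.5, c=1e-4, max_backtracks=30):
    """Illustrative inexact Newton iteration with a MINRES sub-problem solver.
    A hedged sketch of the idea named in the title, not the paper's exact
    Newton-MR method: the inexactness and line-search rules are simplified."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = g @ g
        if np.sqrt(gnorm2) <= g_tol:
            break
        H = hess(x)            # symmetric Hessian (may be indefinite or singular)
        p, _ = minres(H, -g)   # approximate minimum-residual solve of H p = -g
        # Simplified backtracking: require some decrease in ||grad||^2.
        a = 1.0
        for _ in range(max_backtracks):
            gp = grad(x + a * p)
            if gp @ gp <= (1.0 - c * a) * gnorm2:
                break
            a *= rho
        x = x + a * p
    return x

# Toy least-squares example, f(x) = 0.5 * ||A x - b||^2, with hypothetical data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)
hess = lambda x: A.T @ A
x_star = inexact_newton_minres(grad, hess, np.zeros(5))
```

MINRES is used here (rather than conjugate gradient) because it only requires the coefficient matrix to be symmetric, so the sub-problem solve remains meaningful when the Hessian is indefinite or singular, consistent with the non-convex programming classifications above (90C26, 90C30).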
Cites Work
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Recent advances in numerical methods for nonlinear equations and nonlinear least squares
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Truncated regularized Newton method for convex minimizations
- On the convergence of an inexact Newton-type method
- A globally convergent Newton-GMRES method for large sparse systems of nonlinear equations
- Invexity and optimization
- Regularization methods for uniformly rank-deficient nonlinear least-squares problems
- Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization
- On sufficiency of the Kuhn-Tucker conditions
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Introductory lectures on convex optimization. A basic course.
- A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients
- Random perturbation of low rank matrices: improving classical bounds
- Sub-sampled Newton methods
- A unified local convergence analysis of inexact constrained Levenberg-Marquardt methods
- On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption
- Superlinear convergence of a Newton-type algorithm for monotone equations
- Regularized Newton methods for convex minimization problems with singular solutions
- Newton-like methods for solving underdetermined nonlinear equations with nondifferentiable terms
- Loss landscapes and optimization in over-parameterized non-linear systems and neural networks
- Newton-type methods for non-convex optimization under inexact Hessian information
- Local convergence analysis of the Levenberg-Marquardt framework for nonzero-residue nonlinear least-squares problems under an error bound condition
- Restricted strong convexity and its applications to convergence analysis of gradient-type methods in convex optimization
- Cubic regularization of Newton method and its global performance
- Optimization theory and methods. Nonlinear programming
- On the use of iterative methods in cubic regularization for unconstrained optimization
- A Globally Convergent Newton-GMRES Subspace Method for Systems of Nonlinear Equations
- Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions
- Global complexity bound of the Levenberg–Marquardt method
- Convergence of a Regularized Euclidean Residual Algorithm for Nonlinear Least-Squares
- MINRES-QLP: A Krylov Subspace Method for Indefinite or Singular Symmetric Systems
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- The modified Levenberg-Marquardt method for nonlinear equations with cubic convergence
- Euclidean-Norm Error Bounds for SYMMLQ and CG
- Least-Change Secant Update Methods for Underdetermined Systems
- Hybrid Krylov Methods for Nonlinear Systems of Equations
- The Conjugate Gradient Method and Trust Regions in Large Scale Optimization
- What is invexity?
- Inexact Newton Methods
- Solution of Sparse Indefinite Systems of Linear Equations
- Convergence behaviour of inexact Newton methods
- Convergence Theory of Nonlinear Newton–Krylov Algorithms
- Globally Convergent Inexact Newton Methods
- Trust Region Methods
- Degenerate Nonlinear Programming with a Quadratic Growth Condition
- trlib: a vector-free implementation of the GLTR method for iterative solution of the trust region problem
- Solving the Trust-Region Subproblem using the Lanczos Method
- Choosing the Forcing Terms in an Inexact Newton Method
- Critical Point-Finding Methods Reveal Gradient-Flat Regions of Deep Network Losses
- An investigation of Newton-Sketch and subsampled Newton methods
- A Note on Performance Profiles for Benchmarking Software
- Modified Gauss–Newton scheme with worst case guarantees for global performance
- Approximate Gauss–Newton Methods for Nonlinear Least Squares Problems
- Understanding Machine Learning
- Algorithm 937
- Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization
- Exact and inexact subsampled Newton methods for optimization
- Benchmarking optimization software with performance profiles.