Convergence of Newton-MR under Inexact Hessian Information
From MaRDI portal
Publication:5148404
DOI: 10.1137/19M1302211
zbMath: 1458.90473
arXiv: 1909.06224
MaRDI QID: Q5148404
Publication date: 4 February 2021
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1909.06224
Mathematics Subject Classification:
- Numerical mathematical programming methods (65K05)
- Large-scale problems in mathematical programming (90C06)
- Nonlinear programming (90C30)
Related Items
- MINRES: From Negative Curvature Detection to Monotonicity Properties
- Hessian averaging in stochastic Newton methods achieves superlinear convergence
- Linesearch Newton-CG methods for convex optimization with noise
Cites Work
- Stochastic derivative-free optimization using a trust region framework
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Complexity bounds for second-order optimality in unconstrained optimization
- Sample size selection in optimization methods for machine learning
- The optimal perturbation bounds of the Moore-Penrose inverse under the Frobenius norm
- Invexity and optimization
- On sufficiency of the Kuhn-Tucker conditions
- Convergence of quasi-Newton matrices generated by the symmetric rank one update
- Introductory lectures on convex optimization. A basic course
- GMRES, L-curves, and discrete ill-posed problems
- Stochastic optimization using a trust-region method and random models
- Random perturbation of low rank matrices: improving classical bounds
- Sub-sampled Newton methods
- Newton-type methods for non-convex optimization under inexact Hessian information
- A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
- Cubic regularization of Newton method and its global performance
- Convergence of Trust-Region Methods Based on Probabilistic Models
- Stochastic Algorithms for Inverse Problems Involving PDEs and Many Measurements
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- Non-asymptotic theory of random matrices: extreme singular values
- MINRES-QLP: A Krylov Subspace Method for Indefinite or Singular Symmetric Systems
- On the Use of Stochastic Hessian Information in Optimization Methods for Machine Learning
- Perturbation Analysis of the Moore-Penrose Inverse for a Class of Bounded Operators in Hilbert Spaces
- Rank Degeneracy
- What is invexity?
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- Solution of Sparse Indefinite Systems of Linear Equations
- Trust Region Methods
- Complexity and global rates of trust-region methods based on probabilistic models
- ASTRO-DF: A Class of Adaptive Sampling Trust-Region Algorithms for Derivative-Free Stochastic Optimization
- Accelerated Methods for Nonconvex Optimization
- Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
- Analysis of a Symmetric Rank-One Trust Region Method
- An investigation of Newton-Sketch and subsampled Newton methods
- A Note on Performance Profiles for Benchmarking Software
- Understanding Machine Learning
- A method for the solution of certain non-linear problems in least squares
- Exact and inexact subsampled Newton methods for optimization
- Benchmarking optimization software with performance profiles