An investigation of Newton-Sketch and subsampled Newton methods


DOI: 10.1080/10556788.2020.1725751
zbMath: 1454.90112
arXiv: 1705.06211
OpenAlex: W3005768517
MaRDI QID: Q5135249

Albert S. Berahas, Raghu Bollapragada, Jorge Nocedal

Publication date: 19 November 2020

Published in: Optimization Methods and Software

Full work available at URL: https://arxiv.org/abs/1705.06211
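For orientation only, the following is a minimal sketch of a plain subsampled Newton iteration for finite-sum minimization, one of the two method families named in the title. It is not the authors' algorithm: the function names, uniform sampling, fixed regularizer, and unit step length are illustrative assumptions (the methods investigated in the paper are typically paired with line searches and inexact linear-system solves).

import numpy as np

def subsampled_newton(grad, hess_i, w0, n, s, iters=20, reg=1e-8):
    """Hypothetical helper. grad(w): full gradient of (1/n) sum_i f_i(w);
    hess_i(w, i): Hessian of the i-th component f_i."""
    rng = np.random.default_rng(0)
    w = w0
    for _ in range(iters):
        g = grad(w)
        # Approximate the Hessian by averaging over a random subsample of size s.
        idx = rng.choice(n, size=s, replace=False)
        H = sum(hess_i(w, i) for i in idx) / s
        # Regularize and solve the Newton system H p = -g.
        p = np.linalg.solve(H + reg * np.eye(w.size), -g)
        w = w + p  # unit step; practical variants use a line search
    return w

# Illustrative use on least squares, f_i(w) = 0.5 * (A[i] @ w - b[i])**2:
A = np.random.randn(1000, 5)
b = np.random.randn(1000)
grad = lambda w: A.T @ (A @ w - b) / len(b)
hess_i = lambda w, i: np.outer(A[i], A[i])
w_star = subsampled_newton(grad, hess_i, np.zeros(5), n=1000, s=100)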



Related Items

Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy
Quasi-Newton methods for machine learning: forget the past, just sample
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Subsampled nonmonotone spectral gradient methods
Scalable subspace methods for derivative-free nonlinear least-squares optimization
An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
Convergence analysis of a subsampled Levenberg-Marquardt algorithm
M-IHS: an accelerated randomized preconditioning method avoiding costly matrix decompositions
An overview of stochastic quasi-Newton methods for large-scale machine learning
Inexact restoration with subsampled trust-region methods for finite-sum minimization
Newton-MR: inexact Newton method with minimum residual sub-problem solver
Global optimization using random embeddings
Hessian averaging in stochastic Newton methods achieves superlinear convergence
Random projections of linear and semidefinite problems with linear inequalities
Discriminative Bayesian filtering lends momentum to the stochastic Newton method for minimizing log-convex functions
A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
Convergence of Newton-MR under Inexact Hessian Information
Sub-sampled Newton methods
Stochastic proximal quasi-Newton methods for non-convex composite optimization
A robust multi-batch L-BFGS method for machine learning
An Inertial Newton Algorithm for Deep Learning
A Stochastic Semismooth Newton Method for Nonsmooth Nonconvex Optimization
Linesearch Newton-CG methods for convex optimization with noise
Convergence Analysis of Inexact Randomized Iterative Methods


