Near-Optimal Hyperfast Second-Order Method for Convex Optimization
Publication: 4965110
DOI: 10.1007/978-3-030-58657-7_15
zbMath: 1460.90133
arXiv: 2002.09050
OpenAlex: W3084935096
MaRDI QID: Q4965110
Publication date: 25 February 2021
Published in: Mathematical Optimization Theory and Operations Research
Full work available at URL: https://arxiv.org/abs/2002.09050
Related Items (4)
- Local convergence of tensor methods
- Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization
- Super-Universal Regularized Newton Method
- A control-theoretic perspective on optimal high-order optimization
Cites Work
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Lectures on convex optimization
- Lower bounds for finding stationary points I
- Cubic regularization of Newton method and its global performance
- An Accelerated Hybrid Proximal Extragradient Method for Convex Optimization and Its Implications to Second-Order Methods
- Accelerated Regularized Newton Methods for Minimizing Composite Convex Functions
- Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives
- Contracting Proximal Methods for Smooth Convex Optimization