Higher-order Newton methods with polynomial work per iteration
From MaRDI portal
Publication: 6608710
DOI: 10.1016/j.aim.2024.109808
MaRDI QID: Q6608710
Abraar Chaudhry, Jeffrey Zhang, Amir Ali Ahmadi
Publication date: 20 September 2024
Published in: Advances in Mathematics
Cites Work
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- A method for numerical integration on an automatic computer
- Semidefinite representation of convex sets
- Representation of nonnegative convex polynomials
- A Frank–Wolfe type theorem for convex polynomial programs
- DC decomposition of nonconvex polynomials with algebraic techniques
- NP-hardness of deciding convexity of quartic polynomials and related problems
- Complexity aspects of local minima and related notions
- On the complexity of finding a local minimizer of a quadratic function over a polytope
- Local convergence of tensor methods
- Lower bounds for finding stationary points I
- Implementable tensor methods in unconstrained convex optimization
- Variations and extension of the convex-concave procedure
- Cubic regularization of Newton method and its global performance
- On the method for numerical integration of Clenshaw and Curtis
- Global optimization with polynomials and the problem of moments
- A Complete Characterization of the Gap between Convexity and SOS-Convexity
- Convexity in Semialgebraic Geometry and Polynomial Optimization
- Some NP-complete problems in quadratic and nonlinear programming
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- Trust Region Methods
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- Semidefinite Programming
- Iterative Solution of Nonlinear Equations in Several Variables
- Scalable Semidefinite Programming
- An Optimal High-Order Tensor Method for Convex Optimization
- Tensor methods for finding approximate stationary points of convex functions
- Evaluation Complexity of Algorithms for Nonconvex Optimization: Theory, Computation and Perspectives
- A Unified Adaptive Tensor Approximation Scheme to Accelerate Composite Convex Optimization
- A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
- Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
- DSOS and SDSOS Optimization: More Tractable Alternatives to Sum of Squares and Semidefinite Optimization
- A Survey of the S-Lemma