DOI: 10.1007/s10107-016-1026-2
zbMath: 1360.49020
OpenAlex: W2398500126
MaRDI QID: Q517288
Frank E. Curtis, Daniel P. Robinson, Mohammadreza Samadi
Publication date: 23 March 2017
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://doi.org/10.1007/s10107-016-1026-2
Related Items:
On high-order model regularization for multiobjective optimization
A cubic regularization of Newton's method with finite difference Hessian approximations
Concise complexity analyses for trust region methods
A Proximal Quasi-Newton Trust-Region Method for Nonsmooth Regularized Optimization
An active set trust-region method for bound-constrained optimization
A note on the worst-case complexity of nonlinear stepsize control methods for convex smooth unconstrained optimization
Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization
On the use of third-order models with fourth-order regularization for unconstrained optimization
Escaping Strict Saddle Points of the Moreau Envelope in Nonsmooth Optimization
Worst-Case Complexity of TRACE with Inexact Subproblem Solutions for Nonconvex Smooth Optimization
Complexity analysis of interior-point methods for second-order stationary points of nonlinear semidefinite optimization problems
Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
A Newton-like method with mixed factorizations and cubic regularization for unconstrained minimization
Decentralized nonconvex optimization with guaranteed privacy and accuracy
A Newton-CG Based Barrier Method for Finding a Second-Order Stationary Point of Nonconvex Conic Optimization with Complexity Guarantees
A nonlinear conjugate gradient method with complexity guarantees and its application to nonconvex regression
Convergence Properties of an Objective-Function-Free Optimization Regularization Algorithm, Including an \(\boldsymbol{\mathcal{O}(\epsilon^{-3/2})}\) Complexity Bound
A Newton-CG Based Augmented Lagrangian Method for Finding a Second-Order Stationary Point of Nonconvex Equality Constrained Optimization with Complexity Guarantees
On the use of the energy norm in trust-region and adaptive cubic regularization subproblems
On the worst-case evaluation complexity of non-monotone line search algorithms
A proximal trust-region method for nonsmooth optimization with inexact function and gradient evaluations
On High-order Model Regularization for Constrained Optimization
Newton-type methods for non-convex optimization under inexact Hessian information
A Trust Region Method for Finding Second-Order Stationarity in Linearly Constrained Nonconvex Optimization
A Newton-Based Method for Nonconvex Optimization with Fast Evasion of Saddle Points
Second-Order Guarantees of Distributed Gradient Algorithms
Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis
Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis
trlib: a vector-free implementation of the GLTR method for iterative solution of the trust region problem
ARCq: a new adaptive regularization by cubics
On Regularization and Active-set Methods with Complexity for Constrained Optimization
Complexity Analysis of Second-Order Line-Search Algorithms for Smooth Nonconvex Optimization
Complexity Analysis of a Trust Funnel Algorithm for Equality Constrained Optimization
Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality
Accelerated Regularized Newton Methods for Minimizing Composite Convex Functions
Optimality of orders one to three and beyond: characterization and evaluation complexity in constrained nonconvex optimization
Regularized Newton Methods for Minimizing Functions with Hölder Continuous Hessians
A reduced-space line-search method for unconstrained optimization via random descent directions
Regional complexity analysis of algorithms for nonconvex smooth optimization
Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
A generalized worst-case complexity analysis for non-monotone line searches
Trust-Region Methods for the Derivative-Free Optimization of Nonsmooth Black-Box Functions
A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
On the Complexity of an Inexact Restoration Method for Constrained Optimization
A concise second-order complexity analysis for unconstrained optimization using high-order regularized models
A Newton-CG algorithm with complexity guarantees for smooth unconstrained optimization
Large-scale unconstrained optimization using separable cubic modeling and matrix-free subspace minimization
On large-scale unconstrained optimization and arbitrary regularization
Iteration and evaluation complexity for the minimization of functions whose computation is intrinsically inexact
Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary
On complexity and convergence of high-order coordinate descent algorithms for smooth nonconvex box-constrained minimization
Trust-Region Newton-CG with Strong Second-Order Complexity Guarantees for Nonconvex Optimization
The Use of Quadratic Regularization with a Cubic Descent Condition for Unconstrained Optimization