An adaptive regularization method in Banach spaces
From MaRDI portal
Publication:6065217
DOI: 10.1080/10556788.2023.2210253 · zbMath: 1528.49022 · OpenAlex: W4379880189 · MaRDI QID: Q6065217
Unnamed Author, Philippe L. Toint, Serge Gratton
Publication date: 11 December 2023
Published in: Optimization Methods and Software
Full work available at URL: https://doi.org/10.1080/10556788.2023.2210253
infinite-dimensional problems; nonlinear optimization; adaptive regularization; evaluation complexity; Hölder gradients
Numerical methods based on necessary conditions (49M05) Numerical methods based on nonlinear programming (49M37) Programming in abstract spaces (90C48) Numerical methods of relaxation type (49M20) Optimality conditions for problems in abstract spaces (49K27)
Cites Work
- On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization
- Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Trust-region and other regularisations of linear least-squares problems
- Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces
- On the use of the energy norm in trust-region and adaptive cubic regularization subproblems
- A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds
- Cubic regularization of Newton method and its global performance
- On the use of iterative methods in cubic regularization for unconstrained optimization
- Reflexivity and the sup of linear functionals
- A cubic regularization algorithm for unconstrained optimization using line search and nonmonotone techniques
- On the Oracle Complexity of First-Order and Derivative-Free Algorithms for Smooth Nonconvex Minimization
- Functional Analysis, Calculus of Variations and Optimal Control
- Global Convergence of a Class of Trust-Region Methods for Nonconvex Minimization in Hilbert Space
- A Mesh-Independence Principle for Operator Equations and Their Discretizations
- Quasi-Newton Methods and Unconstrained Optimal Control Problems
- Inequalities in Banach spaces with applications
- Asymptotic Mesh Independence of Newton–Galerkin Methods via a Refined Mysovskii Theorem
- Numerical Optimization
- Superlinear Convergence of Affine-Scaling Interior-Point Newton Methods for Infinite-Dimensional Nonlinear Problems with Pointwise Bounds
- Trust Region Methods
- On High-order Model Regularization for Constrained Optimization
- Universal Regularization Methods: Varying the Power, the Smoothness and the Accuracy
- Mesh Independence for Nonlinear Least Squares Problems with Norm Constraints
- Worst-Case Evaluation Complexity and Optimality of Second-Order Methods for Nonconvex Smooth Optimization
- Sharp Worst-Case Evaluation Complexity Bounds for Arbitrary-Order Nonconvex Optimization with Inexpensive Constraints
- Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization