Finding zeros of Hölder metrically subregular mappings via globally convergent Levenberg–Marquardt methods
Publication: 5038173
DOI: 10.1080/10556788.2020.1712602
zbMath: 1501.90069
arXiv: 1812.00818
OpenAlex: W3000315134
Wikidata: Q126313599 (Scholia: Q126313599)
MaRDI QID: Q5038173
Ronan M. T. Fleming, Phan Tu Vuong, Masoud Ahookhosh
Publication date: 29 September 2022
Published in: Optimization Methods and Software
Full work available at URL: https://arxiv.org/abs/1812.00818
Keywords: global convergence; nonlinear equation; Levenberg-Marquardt methods; Hölder metric subregularity; non-isolated solutions; biochemical reaction network kinetics; worst-case global complexity
MSC classifications: Analysis of algorithms and problem complexity (68Q25); Numerical mathematical programming methods (65K05); Nonconvex programming, global optimization (90C26)
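As context for the keywords above, here is a minimal Python sketch of the classical Levenberg-Marquardt iteration for finding a zero of a smooth mapping F. It is illustrative only: the function names, the stopping rule, and the residual-based choice of regularization parameter are assumptions for this example, not the globally convergent, Hölder-metric-subregularity-adapted schemes developed in the paper.

```python
import numpy as np

def levenberg_marquardt(F, J, x0, mu0=1.0, tol=1e-10, max_iter=100):
    """Textbook Levenberg-Marquardt iteration for solving F(x) = 0.

    Illustrative sketch only; the paper's methods use adaptive
    regularization and globalization strategies not reproduced here.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        # A common (assumed) choice: regularization proportional to
        # the squared residual norm, mu_k = mu0 * ||F(x_k)||^2.
        mu = mu0 * np.linalg.norm(Fx) ** 2
        # Levenberg-Marquardt step: solve (J^T J + mu I) d = -J^T F.
        d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ Fx)
        x = x + d
    return x

# Example: a zero of F(x, y) = (x^2 + y^2 - 1, x - y) near (1, 0),
# i.e. the point (1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(levenberg_marquardt(F, J, np.array([1.0, 0.0])))
```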
Related Items
- A fast and simple modification of Newton's method avoiding saddle points
- A modified inexact Levenberg-Marquardt method with the descent property for solving nonlinear equations
- Local convergence of the Levenberg-Marquardt method under Hölder metric subregularity
- A Bregman Forward-Backward Linesearch Algorithm for Nonconvex Composite Optimization: Superlinear Convergence to Nonisolated Local Minima
Cites Work
- Gradient methods for minimizing composite functions
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- An efficient nonmonotone trust-region method for unconstrained optimization
- Global complexity bound analysis of the Levenberg-Marquardt method for nonsmooth equations and its application to the nonlinear complementarity problem
- On a global complexity bound of the Levenberg-Marquardt method
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Levenberg-Marquardt methods with strong local convergence properties for solving nonlinear equations with convex constraints
- New fractional error bounds for polynomial systems with applications to Hölderian stability in optimization and spectral theory of tensors
- On gradients of functions definable in o-minimal structures
- Local behavior of an iterative framework for generalized equations with nonisolated solutions
- Introductory lectures on convex optimization. A basic course.
- Conditions for duality between fluxes and concentrations in biochemical networks
- Accelerating the DC algorithm for smooth functions
- On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption
- Globally convergent algorithms for finding zeros of duplomonotone mappings
- Local convergence of the Levenberg-Marquardt method under Hölder metric subregularity
- Algebraic rules for computing the regularization parameter of the Levenberg-Marquardt method
- An inexact line search approach using modified nonmonotone strategy for unconstrained optimization
- Incorporating nonmonotone strategies into the trust region method for unconstrained optimization
- Cubic regularization of Newton method and its global performance
- A class of nonmonotone Armijo-type line search method for unconstrained optimization
- Convergence of a Regularized Euclidean Residual Algorithm for Nonlinear Least-Squares
- On the Complexity of Steepest Descent, Newton's and Regularized Newton's Methods for Nonconvex Unconstrained Optimization Problems
- Rank-Deficient Nonlinear Least Squares Problems and Subset Selection
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- A Globally Convergent Trust-Region Method for Large-Scale Symmetric Nonlinear Systems
- An inexact Levenberg-Marquardt method for large sparse nonlinear least squares
- Quasi-Newton Methods, Motivation and Theory
- Trust Region Methods
- A Nonmonotone Line Search Technique for Newton’s Method
- Iterative Solution of Nonlinear Equations in Several Variables
- Strong local convergence properties of adaptive regularized methods for nonlinear least squares
- Newton-Type Methods for Optimization and Variational Problems
- Benchmarking optimization software with performance profiles.