A Stochastic Levenberg–Marquardt Method Using Random Models with Complexity Results
DOI: 10.1137/20M1366253
zbMath: 1487.49035
arXiv: 1807.02176
OpenAlex: W3179632780
MaRDI QID: Q5075237
Vyacheslav Kungurtsev, C. W. Royer, El Houcine Bergou, Youssef Diouane
Publication date: 10 May 2022
Published in: SIAM/ASA Journal on Uncertainty Quantification
Full work available at URL: https://arxiv.org/abs/1807.02176
Keywords: machine learning; Levenberg–Marquardt method; worst-case complexity; data assimilation; nonlinear least squares; random models; noisy functions
MSC classifications: Abstract computational complexity for mathematical programming problems (90C60); Derivative-free methods and methods using generalized derivatives (90C56); Numerical methods based on necessary conditions (49M05)
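The paper studies a stochastic Levenberg–Marquardt method for nonlinear least squares. As background for the record, here is a minimal sketch of the classical deterministic Levenberg–Marquardt iteration (damped Gauss–Newton with adaptive regularization), not the random-model variant analyzed in the paper; the test problem and parameter values are illustrative choices, not taken from the source.

```python
import numpy as np

def residuals(x):
    # Rosenbrock function in least-squares form; minimizer at x = (1, 1).
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

def jacobian(x):
    # Exact Jacobian of the residual vector above.
    return np.array([[-20.0 * x[0], 10.0],
                     [-1.0, 0.0]])

def levenberg_marquardt(x, lam=1e-2, tol=1e-10, max_iter=200):
    """Classical LM: solve (J^T J + lam I) delta = -J^T r, adapt lam."""
    for _ in range(max_iter):
        r, J = residuals(x), jacobian(x)
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        delta = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
        if np.linalg.norm(residuals(x + delta)) < np.linalg.norm(r):
            x = x + delta
            lam = max(lam / 10.0, 1e-12)  # successful step: reduce damping
        else:
            lam *= 10.0                   # failed step: increase damping
    return x

x_star = levenberg_marquardt(np.array([-1.2, 1.0]))
print(x_star)  # converges to the minimizer near [1, 1]
```

The stochastic variant in the paper replaces the exact residuals and Jacobian with random models built from noisy evaluations and adapts the regularization parameter accordingly.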
Cites Work
- Stochastic derivative-free optimization using a trust region framework
- On the local convergence of a derivative-free algorithm for least-squares minimization
- Computation of sparse low degree interpolating polynomials and their application to derivative-free optimization
- A Levenberg-Marquardt method for large nonlinear least-squares problems with dynamic accuracy in functions and gradients
- Stochastic optimization using a trust-region method and random models
- Convergence and complexity analysis of a Levenberg-Marquardt algorithm for inverse problems
- A derivative-free Gauss-Newton method
- Convergence and evaluation-complexity analysis of a regularized tensor-Newton method for solving nonlinear least-squares problems
- A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems
- Trust-Region Methods Without Using Derivatives: Worst Case Complexity and the NonSmooth Case
- Global complexity bound of the Levenberg–Marquardt method
- Ensemble Kalman methods for inverse problems
- On the Evaluation Complexity of Cubic Regularization Methods for Potentially Rank-Deficient Nonlinear Least-Squares Problems and Its Relevance to Constrained Nonlinear Optimization
- Convergence of Trust-Region Methods Based on Probabilistic Models
- Probability and Stochastics
- A Derivative-Free Algorithm for Least-Squares Minimization
- Levenberg–Marquardt Methods Based on Probabilistic Gradient Models and Inexact Subproblem Solution, with Application to Data Assimilation
- Introduction to Derivative-Free Optimization
- An Algorithm for Least-Squares Estimation of Nonlinear Parameters
- Nonlinear least squares — the Levenberg algorithm revisited
- Complexity and global rates of trust-region methods based on probabilistic models
- Optimization Methods for Large-Scale Machine Learning
- Inverse Problem Theory and Methods for Model Parameter Estimation
- Improving the Flexibility and Robustness of Model-based Derivative-free Optimization Solvers
- Tikhonov Regularization within Ensemble Kalman Inversion
- Adaptive regularisation for ensemble Kalman inversion
- A Nonmonotone Matrix-Free Algorithm for Nonlinear Equality-Constrained Least-Squares Problems
- Derivative-free optimization methods
- The ensemble Kalman filter for combined state and parameter estimation
- A method for the solution of certain non-linear problems in least squares