Stochastic trust-region algorithm in random subspaces with convergence and expected complexity analyses
Publication:6580002
DOI: 10.1137/22M1524072
MaRDI QID: Q6580002
Kwassi Joseph Dzahini, Stefan M. Wild
Publication date: 29 July 2024
Published in: SIAM Journal on Optimization
MSC classifications:
- Nonlinear programming (90C30)
- Derivative-free methods and methods using generalized derivatives (90C56)
- Stochastic programming (90C15)
Cites Work
- Sharp nonasymptotic bounds on the norm of random matrices with independent entries
- Stochastic optimization using a trust-region method and random models
- Adaptive estimation of a quadratic functional by model selection
- Stochastic Nelder-Mead simplex method -- a new globally convergent direct search method for simulation optimization
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
- Expected complexity analysis of stochastic direct-search
- Constrained stochastic blackbox optimization using a progressive barrier and probabilistic estimates
- Computational Advertising: Techniques for Targeting Relevant Ads
- Estimating Derivatives of Noisy Simulations
- Sparser Johnson-Lindenstrauss Transforms
- Extensions of Lipschitz mappings into a Hilbert space
- Introduction to Derivative-Free Optimization
- The Efficient Generation of Random Orthogonal Matrices with an Application to Condition Estimators
- ASTRO-DF: A Class of Adaptive Sampling Trust-Region Algorithms for Derivative-Free Stochastic Optimization
- Derivative-Free and Blackbox Optimization
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- A Stochastic Levenberg--Marquardt Method Using Random Models with Complexity Results
- Stochastic Trust-Region Methods with Trust-Region Radius Depending on Probabilistic Models
- Benchmarking Derivative-Free Optimization Algorithms
- A Stochastic Line Search Method with Expected Complexity Analysis
- Derivative-free optimization methods
- The Random Matrix Theory of the Classical Compact Groups
- A basic course in probability theory
- Scalable subspace methods for derivative-free nonlinear least-squares optimization
- Direct Search Based on Probabilistic Descent in Reduced Spaces
- Optimization by moving ridge functions: derivative-free optimization for computationally intensive functions
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
Related Items (1)