A zeroth order method for stochastic weakly convex optimization
Publication: 2057220
DOI: 10.1007/s10589-021-00313-3 · zbMath: 1481.90236 · arXiv: 2002.08083 · OpenAlex: W3198503994 · MaRDI QID: Q2057220
Publication date: 8 December 2021
Published in: Computational Optimization and Applications
Full work available at URL: https://arxiv.org/abs/2002.08083
MSC classifications: Numerical mathematical programming methods (65K05); Derivative-free methods and methods using generalized derivatives (90C56); Stochastic programming (90C15)
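Zeroth-order (derivative-free) methods of the kind named in the title typically rely on gradient estimates built from function values alone, such as the two-point Gaussian-smoothing estimator popularized by the cited "Random gradient-free minimization of convex functions". The following is a minimal illustrative sketch of that classic estimator inside a plain stochastic descent loop; it is not the paper's actual algorithm, and all function names and parameter values are chosen for illustration.

```python
import numpy as np

def two_point_grad_estimate(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimator with Gaussian smoothing.

    Uses only function values:  g = (f(x + mu*u) - f(x)) / mu * u,
    with u ~ N(0, I).  In expectation, g approximates the gradient of
    the Gaussian-smoothed surrogate of f.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x)) / mu * u

def zeroth_order_descent(f, x0, step=0.01, iters=2000, mu=1e-4, seed=0):
    """Minimal descent loop driven by the two-point estimator (illustrative)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = two_point_grad_estimate(f, x, mu=mu, rng=rng)
        x = x - step * g
    return x

# Example: minimize the smooth convex quadratic f(x) = ||x - 1||^2.
x_star = zeroth_order_descent(lambda x: np.sum((x - 1.0) ** 2), np.zeros(3))
```

On this simple quadratic the iterates concentrate near the minimizer at the all-ones vector; for the weakly convex stochastic setting studied in the paper itself, the estimator is combined with more careful step-size and sampling schemes.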
Related Items (2)
- A Zeroth-Order Proximal Stochastic Gradient Method for Weakly Convex Stochastic Optimization
- Adaptive sampling quasi-Newton methods for zeroth-order stochastic optimization
Cites Work
- Stochastic derivative-free optimization using a trust region framework
- Stochastic optimization using a trust-region method and random models
- Stochastic mesh adaptive direct search for blackbox optimization using probabilistic estimates
- Random gradient-free minimization of convex functions
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Variational Analysis
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Benchmarking Derivative-Free Optimization Algorithms
- Derivative-free optimization methods
- Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Simulation optimization: a review of algorithms and applications