Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
DOI: 10.1007/s10208-021-09499-8
OpenAlex: W3136297071
MaRDI QID: Q2696568
Krishnakumar Balasubramanian, Saeed Ghadimi
Publication date: 14 April 2023
Published in: Foundations of Computational Mathematics
Full work available at URL: https://arxiv.org/abs/1809.06474
Keywords: stochastic optimization; nonconvex optimization; Newton method; complexity bounds; conditional gradient methods; zeroth-order methods
MSC classification: Numerical mathematical programming methods (65K05); Nonconvex programming, global optimization (90C26); Derivative-free methods and methods using generalized derivatives (90C56); Newton-type methods (49M15); Stochastic programming (90C15)
Related Items (7)
Uses Software
Cites Work
- Adaptive cubic regularisation methods for unconstrained optimization. I: Motivation, convergence and numerical results
- Approximation of functions of few variables in high dimensions
- Adaptive cubic regularisation methods for unconstrained optimization. II: Worst-case function- and derivative-evaluation complexity
- Moment inequalities for sums of dependent random variables under projective conditions
- Estimation of the mean of a multivariate normal distribution
- Introductory lectures on convex optimization. A basic course.
- Conditional gradient type methods for composite nonlinear and stochastic optimization
- Second-order optimality and beyond: characterization and evaluation complexity in convexly constrained nonlinear optimization
- A geometric analysis of phase retrieval
- Sub-Gaussian estimators of the mean of a random matrix with heavy-tailed entries
- Newton-type methods for non-convex optimization under inexact Hessian information
- Implementable tensor methods in unconstrained convex optimization
- Information based complexity for high dimensional sparse functions
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Random gradient-free minimization of convex functions
- Complexity of approximation of functions of few variables in high dimensions
- Cubic regularization of Newton method and its global performance
- Lectures on Modern Convex Optimization
- Conditional Gradient Sliding for Convex Optimization
- Newton-Stein Method: An optimization method for GLMs via Stein's Lemma
- Simulation and the Monte Carlo Method
- The Expected Norm of a Sum of Independent Random Matrices: An Elementary Approach
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Introduction to Derivative-Free Optimization
- Some NP-complete problems in quadratic and nonlinear programming
- Accelerated Methods for NonConvex Optimization
- First-Order Methods in Optimization
- Non-convex Optimization for Machine Learning
- Kernel-based methods for bandit convex optimization
- Algorithms for learning sparse additive models with interactions in high dimensions
- Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
- Minimax-optimal rates for sparse additive models over kernel classes via convex programming
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Compressed sensing