Small errors in random zeroth-order optimization are imaginary
Publication: Q6580001
DOI: 10.1137/22M1510261
zbMATH Open: 1544.65101
MaRDI QID: Q6580001
Authors: Wouter Jongeneel, Daniel Kuhn, Man-Chung Yue
Publication date: 29 July 2024
Published in: SIAM Journal on Optimization
Mathematics Subject Classification:
- Numerical mathematical programming methods (65K05)
- Derivative-free methods and methods using generalized derivatives (90C56)
Cites Work
- First-order methods of smooth convex optimization with inexact oracle
- Do you trust derivatives or differences?
- Optimal order of accuracy of search algorithms in stochastic optimization
- The complex step approximation to the Fréchet derivative of a matrix function
- On the accuracy of the complex-step-finite-difference method
- Practical mathematical optimization. Basic optimization theory and gradient-based algorithms
- A new one-point residual-feedback oracle for black-box learning and control
- A theoretical and empirical comparison of gradient approximations in derivative-free optimization
- A one-bit, comparison-based gradient estimator
- Improved exploitation of higher order smoothness in derivative-free optimization
- Stochastic online optimization. Single-point and multi-point non-linear multi-armed bandits. Convex and strongly-convex case
- Random gradient-free minimization of convex functions
- Complex-step derivative approximation in noisy environment
- Minimax efficient finite-difference stochastic gradient estimators using black-box function evaluations
- Zeroth-order nonconvex stochastic optimization: handling constraints, high dimensionality, and saddle points
- Numerical computing with IEEE floating point arithmetic. Incl. one theorem, one rule of thumb, and one hundred and one exercises
- Optimization of convex functions with random pursuit
- Introduction to Smooth Manifolds
- Beautiful differentiation
- Julia: A Fresh Approach to Numerical Computing
- Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations
- Using Multicomplex Variables for Automatic Computation of High-Order Derivatives
- When is a Function that Satisfies the Cauchy-Riemann Equations Analytic?
- Smooth Optimization with Approximate Gradient
- Evaluating Derivatives
- Introduction to Derivative-Free Optimization
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- Using Complex Variables to Estimate Derivatives of Real Functions
- Derivative-Free and Blackbox Optimization
- Derivative-Free Optimization of Noisy Functions via Quasi-Newton Methods
- Derivative-Free Methods for Policy Optimization: Guarantees for Linear Quadratic Systems
- Global Convergence Rate Analysis of a Generic Line Search Algorithm with Noise
- Finite Difference Gradient Approximation: To Randomize or Not?
- Five stages of accepting constructive mathematics
- Derivative-free optimization methods
- An Optimal Algorithm for Bandit and Zero-Order Convex Optimization with Two-Point Feedback
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- The complex-step derivative approximation
- Numerical Differentiation of Analytic Functions
- A Simplex Method for Function Minimization
- Stochastic Estimation of the Maximum of a Regression Function
- On the numerical performance of finite-difference-based methods for derivative-free optimization
- Stochastic Zeroth-Order Riemannian Derivative Estimation and Optimization
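Note: The title alludes to the complex-step derivative approximation, which several of the cited works develop (e.g., "The complex-step derivative approximation"; "Using Complex Variables to Estimate Derivatives of Real Functions"; "Numerical Differentiation of Analytic Functions"). As an illustrative sketch only, not code from the paper itself: for a real-analytic f, the expansion f(x + ih) = f(x) + ih f'(x) + O(h^2) gives f'(x) ≈ Im f(x + ih)/h with O(h^2) truncation error and no subtractive cancellation, so h can be taken far below machine precision.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-200):
    # Complex-step estimate of f'(x): Im f(x + ih) / h.
    # No subtraction of nearly equal values occurs, so, unlike a
    # forward finite difference, h can sit far below machine epsilon;
    # the truncation error is O(h^2) for real-analytic f.
    return np.imag(f(x + 1j * h)) / h

# Classic test function from Squire and Trapp (1998), cited above.
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
print(complex_step_derivative(f, 1.5))  # matches f'(1.5) to machine precision
```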